00:00:00.000 Started by upstream project "autotest-per-patch" build number 132068
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.035 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.036 The recommended git tool is: git
00:00:00.036 using credential 00000000-0000-0000-0000-000000000002
00:00:00.039 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.061 Fetching changes from the remote Git repository
00:00:00.062 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.099 Using shallow fetch with depth 1
00:00:00.099 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.099 > git --version # timeout=10
00:00:00.160 > git --version # 'git version 2.39.2'
00:00:00.160 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.212 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.212 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.318 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.332 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.346 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:03.346 > git config core.sparsecheckout # timeout=10
00:00:03.357 > git read-tree -mu HEAD # timeout=10
00:00:03.372 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:03.390 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:03.390 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:03.473 [Pipeline] Start of Pipeline
00:00:03.485 [Pipeline] library
00:00:03.486 Loading library shm_lib@master
00:00:03.487 Library shm_lib@master is cached. Copying from home.
00:00:03.502 [Pipeline] node
00:00:03.518 Running on WFP39 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:03.520 [Pipeline] {
00:00:03.531 [Pipeline] catchError
00:00:03.532 [Pipeline] {
00:00:03.544 [Pipeline] wrap
00:00:03.552 [Pipeline] {
00:00:03.559 [Pipeline] stage
00:00:03.560 [Pipeline] { (Prologue)
00:00:03.793 [Pipeline] sh
00:00:04.084 + logger -p user.info -t JENKINS-CI
00:00:04.103 [Pipeline] echo
00:00:04.104 Node: WFP39
00:00:04.110 [Pipeline] sh
00:00:04.413 [Pipeline] setCustomBuildProperty
00:00:04.423 [Pipeline] echo
00:00:04.424 Cleanup processes
00:00:04.430 [Pipeline] sh
00:00:04.716 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.716 3391376 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:04.727 [Pipeline] sh
00:00:05.007 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:05.007 ++ grep -v 'sudo pgrep'
00:00:05.007 ++ awk '{print $1}'
00:00:05.007 + sudo kill -9
00:00:05.007 + true
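For reference, the stale-process cleanup traced above reduces to the following sketch (the workspace path is this job's; the WS shorthand is introduced here for readability, and the trailing || true is what produces the "+ true" line when pgrep matches nothing but itself):

  # Kill any SPDK processes left over from a previous run on this node.
  # An empty pgrep result must not fail the build, hence || true.
  WS=/var/jenkins/workspace/short-fuzz-phy-autotest
  sudo kill -9 $(sudo pgrep -af "$WS/spdk" | grep -v 'sudo pgrep' | awk '{print $1}') || true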
00:00:05.021 [Pipeline] cleanWs
00:00:05.030 [WS-CLEANUP] Deleting project workspace...
00:00:05.030 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.037 [WS-CLEANUP] done
00:00:05.041 [Pipeline] setCustomBuildProperty
00:00:05.055 [Pipeline] sh
00:00:05.336 + sudo git config --global --replace-all safe.directory '*'
00:00:05.404 [Pipeline] httpRequest
00:00:06.106 [Pipeline] echo
00:00:06.107 Sorcerer 10.211.164.101 is alive
00:00:06.114 [Pipeline] retry
00:00:06.115 [Pipeline] {
00:00:06.126 [Pipeline] httpRequest
00:00:06.130 HttpMethod: GET
00:00:06.130 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:06.131 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:06.153 Response Code: HTTP/1.1 200 OK
00:00:06.154 Success: Status code 200 is in the accepted range: 200,404
00:00:06.154 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:34.578 [Pipeline] }
00:00:34.595 [Pipeline] // retry
00:00:34.600 [Pipeline] sh
00:00:34.880 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:35.153 [Pipeline] httpRequest
00:00:36.221 [Pipeline] echo
00:00:36.223 Sorcerer 10.211.164.101 is alive
00:00:36.233 [Pipeline] retry
00:00:36.235 [Pipeline] {
00:00:36.249 [Pipeline] httpRequest
00:00:36.253 HttpMethod: GET
00:00:36.254 URL: http://10.211.164.101/packages/spdk_4c618f461635191bcb6cf058b15ba397c88a2b60.tar.gz
00:00:36.254 Sending request to url: http://10.211.164.101/packages/spdk_4c618f461635191bcb6cf058b15ba397c88a2b60.tar.gz
00:00:36.278 Response Code: HTTP/1.1 200 OK
00:00:36.279 Success: Status code 200 is in the accepted range: 200,404
00:00:36.279 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_4c618f461635191bcb6cf058b15ba397c88a2b60.tar.gz
00:07:14.333 [Pipeline] }
00:07:14.350 [Pipeline] // retry
00:07:14.357 [Pipeline] sh
00:07:14.643 + tar --no-same-owner -xf spdk_4c618f461635191bcb6cf058b15ba397c88a2b60.tar.gz
00:07:18.844 [Pipeline] sh
00:07:19.125 + git -C spdk log --oneline -n5
00:07:19.125 4c618f461 test/nvmf: Don't pin nvmf_bdevperf and nvmf_target_disconnect to phy
00:07:19.125 a51629061 test/nvmf: Remove all transport conditions from the test suites
00:07:19.125 9f70a047a test/nvmf: Drop $RDMA_IP_LIST
00:07:19.125 dbbc706e0 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP
00:07:19.125 ea915c2d7 test/nvmf: Hook nvmf/setup.sh into nvmf/common.sh
00:07:19.136 [Pipeline] }
00:07:19.149 [Pipeline] // stage
00:07:19.158 [Pipeline] stage
00:07:19.160 [Pipeline] { (Prepare)
00:07:19.176 [Pipeline] writeFile
00:07:19.191 [Pipeline] sh
00:07:19.474 + logger -p user.info -t JENKINS-CI
00:07:19.486 [Pipeline] sh
00:07:19.768 + logger -p user.info -t JENKINS-CI
00:07:19.780 [Pipeline] sh
00:07:20.065 + cat autorun-spdk.conf
00:07:20.065 SPDK_RUN_FUNCTIONAL_TEST=1
00:07:20.065 SPDK_TEST_FUZZER_SHORT=1
00:07:20.065 SPDK_TEST_FUZZER=1
00:07:20.065 SPDK_TEST_SETUP=1
00:07:20.065 SPDK_RUN_UBSAN=1
00:07:20.072 RUN_NIGHTLY=0
00:07:20.076 [Pipeline] readFile
00:07:20.100 [Pipeline] withEnv
00:07:20.102 [Pipeline] {
00:07:20.113 [Pipeline] sh
00:07:20.397 + set -ex
00:07:20.397 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]]
00:07:20.397 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
00:07:20.397 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:07:20.397 ++ SPDK_TEST_FUZZER_SHORT=1
00:07:20.397 ++ SPDK_TEST_FUZZER=1
00:07:20.397 ++ SPDK_TEST_SETUP=1
00:07:20.397 ++ SPDK_RUN_UBSAN=1
00:07:20.397 ++ RUN_NIGHTLY=0
00:07:20.397 + case $SPDK_TEST_NVMF_NICS in
00:07:20.397 + DRIVERS=
00:07:20.397 + [[ -n '' ]]
00:07:20.397 + exit 0
00:07:20.406 [Pipeline] }
00:07:20.420 [Pipeline] // withEnv
00:07:20.425 [Pipeline] }
00:07:20.437 [Pipeline] // stage
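The Prepare stage above just sources the generated autorun-spdk.conf and decides whether NIC kernel drivers need loading; with SPDK_TEST_NVMF_NICS unset, DRIVERS stays empty and the script exits cleanly. A minimal sketch of that gate (the mlx5 arm is an illustrative mapping, not the real table from the jjb scripts):

  # Source the per-job config, then pick drivers based on the NIC under test.
  source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
  case $SPDK_TEST_NVMF_NICS in
    mlx5) DRIVERS=mlx5_ib ;;  # illustrative only
    *) DRIVERS= ;;
  esac
  [[ -n $DRIVERS ]] || exit 0  # nothing to load for this fuzzer job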
00:07:20.445 [Pipeline] catchError
00:07:20.446 [Pipeline] {
00:07:20.459 [Pipeline] timeout
00:07:20.459 Timeout set to expire in 30 min
00:07:20.461 [Pipeline] {
00:07:20.475 [Pipeline] stage
00:07:20.477 [Pipeline] { (Tests)
00:07:20.490 [Pipeline] sh
00:07:20.773 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest
00:07:20.773 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest
00:07:20.773 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest
00:07:20.773 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]]
00:07:20.773 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:20.773 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output
00:07:20.773 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]]
00:07:20.773 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:07:20.773 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output
00:07:20.773 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]]
00:07:20.773 + [[ short-fuzz-phy-autotest == pkgdep-* ]]
00:07:20.773 + cd /var/jenkins/workspace/short-fuzz-phy-autotest
00:07:20.773 + source /etc/os-release
00:07:20.773 ++ NAME='Fedora Linux'
00:07:20.773 ++ VERSION='39 (Cloud Edition)'
00:07:20.773 ++ ID=fedora
00:07:20.773 ++ VERSION_ID=39
00:07:20.773 ++ VERSION_CODENAME=
00:07:20.773 ++ PLATFORM_ID=platform:f39
00:07:20.773 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:07:20.773 ++ ANSI_COLOR='0;38;2;60;110;180'
00:07:20.773 ++ LOGO=fedora-logo-icon
00:07:20.773 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:07:20.773 ++ HOME_URL=https://fedoraproject.org/
00:07:20.773 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:07:20.773 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:07:20.773 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:07:20.773 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:07:20.773 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:07:20.773 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:07:20.773 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:07:20.773 ++ SUPPORT_END=2024-11-12
00:07:20.773 ++ VARIANT='Cloud Edition'
00:07:20.773 ++ VARIANT_ID=cloud
00:07:20.773 + uname -a
00:07:20.773 Linux spdk-wfp-39 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:07:20.773 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status
00:07:24.092 Hugepages
00:07:24.092 node hugesize free / total
00:07:24.092 node0 1048576kB 0 / 0
00:07:24.092 node0 2048kB 0 / 0
00:07:24.092 node1 1048576kB 0 / 0
00:07:24.092 node1 2048kB 0 / 0
00:07:24.092
00:07:24.092 Type BDF Vendor Device NUMA Driver Device Block devices
00:07:24.092 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:07:24.092 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:07:24.093 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:07:24.093 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:07:24.093 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:07:24.093 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:07:24.093 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:07:24.093 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:07:24.351 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:07:24.351 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:07:24.351 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:07:24.351 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:07:24.351 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:07:24.351 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:07:24.351 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:07:24.351 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:07:24.351 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:07:24.351 + rm -f /tmp/spdk-ld-path
00:07:24.351 + source autorun-spdk.conf
00:07:24.351 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:07:24.351 ++ SPDK_TEST_FUZZER_SHORT=1
00:07:24.351 ++ SPDK_TEST_FUZZER=1
00:07:24.351 ++ SPDK_TEST_SETUP=1
00:07:24.351 ++ SPDK_RUN_UBSAN=1
00:07:24.351 ++ RUN_NIGHTLY=0
00:07:24.351 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:07:24.351 + [[ -n '' ]]
00:07:24.351 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:07:24.351 + for M in /var/spdk/build-*-manifest.txt
00:07:24.351 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:07:24.351 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:07:24.351 + for M in /var/spdk/build-*-manifest.txt
00:07:24.351 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:07:24.351 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:07:24.351 + for M in /var/spdk/build-*-manifest.txt
00:07:24.351 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:07:24.351 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/
00:07:24.351 ++ uname
00:07:24.351 + [[ Linux == \L\i\n\u\x ]]
00:07:24.351 + sudo dmesg -T
00:07:24.609 + sudo dmesg --clear
00:07:24.609 + dmesg_pid=3393519
00:07:24.609 + sudo dmesg -Tw
00:07:24.609 + [[ Fedora Linux == FreeBSD ]]
00:07:24.609 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:07:24.609 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:07:24.609 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:07:24.609 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:07:24.609 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:07:24.609 + [[ -x /usr/src/fio-static/fio ]]
00:07:24.609 + export FIO_BIN=/usr/src/fio-static/fio
00:07:24.609 + FIO_BIN=/usr/src/fio-static/fio
00:07:24.609 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:07:24.609 + [[ ! -v VFIO_QEMU_BIN ]]
00:07:24.609 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:07:24.609 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:24.609 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:07:24.609 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:07:24.609 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:24.609 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:07:24.609 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
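Everything from UNBIND_ENTIRE_IOMMU_GROUP down to QEMU_BIN above follows one probe-then-export pattern: a dependency staged on the node (VM image, static fio, custom QEMU builds) is exported only if it actually exists. A condensed sketch of that pattern before autorun.sh takes over (the export_if_present helper is hypothetical, introduced here for illustration; the real checks vary between -e and -x):

  # Hypothetical helper: export a variable only when its backing path exists.
  export_if_present() {
    local var=$1 path=$2
    [[ -e $path ]] && export "$var=$path"
  }
  export_if_present VM_IMAGE /var/spdk/dependencies/vhost/spdk_test_image.qcow2
  export_if_present FIO_BIN /usr/src/fio-static/fio
  export_if_present VFIO_QEMU_BIN /usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
  export_if_present QEMU_BIN /usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64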
16:31:29 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
16:31:29 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
16:31:29 -- short-fuzz-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
16:31:29 -- short-fuzz-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_FUZZER_SHORT=1
16:31:29 -- short-fuzz-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_FUZZER=1
16:31:29 -- short-fuzz-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_SETUP=1
16:31:29 -- short-fuzz-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1
16:31:29 -- short-fuzz-phy-autotest/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
16:31:29 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
16:31:29 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
16:31:29 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
16:31:29 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
16:31:29 -- scripts/common.sh@15 -- $ shopt -s extglob
16:31:29 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
16:31:29 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
16:31:29 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
16:31:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:31:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:31:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:31:29 -- paths/export.sh@5 -- $ export PATH
16:31:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:31:29 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
16:31:29 -- common/autobuild_common.sh@486 -- $ date +%s
16:31:29 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730820689.XXXXXX
16:31:29 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730820689.RBshnK
16:31:29 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
16:31:29 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
16:31:29 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
16:31:29 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
16:31:29 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
16:31:29 -- common/autobuild_common.sh@502 -- $ get_config_params
16:31:29 -- common/autotest_common.sh@407 -- $ xtrace_disable
16:31:29 -- common/autotest_common.sh@10 -- $ set +x
16:31:29 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
16:31:29 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
16:31:29 -- pm/common@17 -- $ local monitor
16:31:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:31:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:31:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:31:29 -- pm/common@21 -- $ date +%s
16:31:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:31:29 -- pm/common@21 -- $ date +%s
16:31:29 -- pm/common@25 -- $ sleep 1
16:31:29 -- pm/common@21 -- $ date +%s
16:31:29 -- pm/common@21 -- $ date +%s
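start_monitor_resources, entered above, stamps every collector it launches (traced just below) with the same epoch (1730820689 here) so the logs redirected into output/power can be correlated with this build. The launch pattern amounts to the following condensed sketch (collector paths from the trace; collect-bmc-pm additionally runs under sudo -E):

  # Launch each resource monitor in the background, tagged with one epoch.
  ts=$(date +%s)
  power_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power
  for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
    spdk/scripts/perf/pm/$mon -d "$power_dir" -l -p "monitor.autobuild.sh.$ts" &
  done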
16:31:29 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730820689
16:31:29 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730820689
16:31:29 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730820689
16:31:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730820689
00:07:24.867 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730820689_collect-cpu-load.pm.log
00:07:24.867 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730820689_collect-cpu-temp.pm.log
00:07:24.867 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730820689_collect-vmstat.pm.log
00:07:24.867 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730820689_collect-bmc-pm.bmc.pm.log
16:31:30 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
16:31:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
16:31:30 -- spdk/autobuild.sh@12 -- $ umask 022
16:31:30 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
16:31:30 -- spdk/autobuild.sh@16 -- $ date -u
00:07:25.802 Tue Nov 5 03:31:30 PM UTC 2024
16:31:30 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:07:25.802 v25.01-pre-164-g4c618f461
16:31:30 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
16:31:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
16:31:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
16:31:30 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
16:31:30 -- common/autotest_common.sh@1109 -- $ xtrace_disable
16:31:30 -- common/autotest_common.sh@10 -- $ set +x
00:07:25.802 ************************************
00:07:25.802 START TEST ubsan
00:07:25.802 ************************************
16:31:30 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:07:25.802 using ubsan
00:07:25.802
00:07:25.802 real 0m0.001s
00:07:25.802 user 0m0.001s
00:07:25.802 sys 0m0.000s
16:31:30 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
16:31:30 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:07:25.802 ************************************
00:07:25.802 END TEST ubsan
00:07:25.802 ************************************
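The START TEST / END TEST block above is produced by run_test, which the log invokes again below for autobuild_llvm_precompile and make. Conceptually it is a banner-plus-timer wrapper; a simplified sketch (the real function in autotest_common.sh also manages xtrace state and exit codes):

  # Simplified shape of run_test: banner, timed command, banner.
  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
  }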
16:31:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
16:31:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
16:31:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
16:31:30 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]]
16:31:30 -- spdk/autobuild.sh@52 -- $ llvm_precompile
16:31:30 -- common/autobuild_common.sh@438 -- $ run_test autobuild_llvm_precompile _llvm_precompile
16:31:30 -- common/autotest_common.sh@1103 -- $ '[' 2 -le 1 ']'
16:31:30 -- common/autotest_common.sh@1109 -- $ xtrace_disable
16:31:30 -- common/autotest_common.sh@10 -- $ set +x
00:07:25.802 ************************************
00:07:25.802 START TEST autobuild_llvm_precompile
00:07:25.802 ************************************
16:31:30 autobuild_llvm_precompile -- common/autotest_common.sh@1127 -- $ _llvm_precompile
16:31:30 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version
16:31:30 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39)
00:07:25.803 Target: x86_64-redhat-linux-gnu
00:07:25.803 Thread model: posix
00:07:25.803 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]]
16:31:30 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17
16:31:30 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17
16:31:30 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17
16:31:30 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17
16:31:30 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17
16:31:30 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
16:31:30 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
16:31:30 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]]
16:31:30 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a'
16:31:30 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:07:26.061 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:07:26.061 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:07:26.626 Using 'verbs' RDMA provider
00:07:42.424 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:07:57.292 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:07:57.292 Creating mk/config.mk...done.
00:07:57.292 Creating mk/cc.flags.mk...done.
00:07:57.292 Type 'make' to build.
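_llvm_precompile, traced above, derives the clang major version with a bash regex and then resolves the matching libFuzzer archive before re-running configure with --with-fuzzer. The essence, simplified from the extglob pattern in the trace (the configure_flags variable is illustrative, not a name from autobuild_common.sh):

  # Pick the clang major version and locate its libclang_rt fuzzer archive.
  [[ $(clang --version) =~ version\ (([0-9]+)\.([0-9]+)\.([0-9]+)) ]]
  clang_num=${BASH_REMATCH[2]}  # 17 on this node
  fuzzer_lib=(/usr/lib*/clang/"$clang_num"/lib/*linux*/libclang_rt.fuzzer_no_main*.a)
  [[ -e ${fuzzer_lib[0]} ]] && configure_flags="--with-fuzzer=${fuzzer_lib[0]}"  # illustrative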
00:07:57.292
00:07:57.292 real 0m29.927s
00:07:57.292 user 0m14.067s
00:07:57.292 sys 0m14.815s
16:32:00 autobuild_llvm_precompile -- common/autotest_common.sh@1128 -- $ xtrace_disable
16:32:00 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x
00:07:57.292 ************************************
00:07:57.292 END TEST autobuild_llvm_precompile
00:07:57.292 ************************************
16:32:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
16:32:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
16:32:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
16:32:00 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]]
16:32:00 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a
00:07:57.292 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk
00:07:57.292 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:07:57.292 Using 'verbs' RDMA provider
00:08:09.755 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done.
00:08:21.995 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:08:21.995 Creating mk/config.mk...done.
00:08:21.995 Creating mk/cc.flags.mk...done.
00:08:21.995 Type 'make' to build.
16:32:26 -- spdk/autobuild.sh@70 -- $ run_test make make -j72
16:32:26 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
16:32:26 -- common/autotest_common.sh@1109 -- $ xtrace_disable
16:32:26 -- common/autotest_common.sh@10 -- $ set +x
00:08:21.995 ************************************
00:08:21.995 START TEST make
00:08:21.995 ************************************
16:32:26 make -- common/autotest_common.sh@1127 -- $ make -j72
00:08:22.561 make[1]: Nothing to be done for 'all'.
00:08:24.474 The Meson build system
00:08:24.474 Version: 1.5.0
00:08:24.474 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user
00:08:24.474 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:08:24.474 Build type: native build
00:08:24.474 Project name: libvfio-user
00:08:24.474 Project version: 0.0.1
00:08:24.474 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)")
00:08:24.474 C linker for the host machine: clang-17 ld.bfd 2.40-14
00:08:24.474 Host machine cpu family: x86_64
00:08:24.474 Host machine cpu: x86_64
00:08:24.474 Run-time dependency threads found: YES
00:08:24.474 Library dl found: YES
00:08:24.474 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:08:24.474 Run-time dependency json-c found: YES 0.17
00:08:24.474 Run-time dependency cmocka found: YES 1.1.7
00:08:24.474 Program pytest-3 found: NO
00:08:24.474 Program flake8 found: NO
00:08:24.474 Program misspell-fixer found: NO
00:08:24.474 Program restructuredtext-lint found: NO
00:08:24.474 Program valgrind found: YES (/usr/bin/valgrind)
00:08:24.474 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:08:24.474 Compiler for C supports arguments -Wmissing-declarations: YES
00:08:24.474 Compiler for C supports arguments -Wwrite-strings: YES
00:08:24.474 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:08:24.474 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:08:24.474 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:08:24.474 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
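The configure output above comes from meson. Judging by the "User defined options" summary printed below, the invocation SPDK's build drives is roughly equivalent to this sketch (flag spelling assumed; the exact command lives in SPDK's build scripts):

  # Approximate meson configuration for the bundled libvfio-user (debug, static).
  meson setup build-debug ../libvfio-user \
    --buildtype debug \
    --default-library static \
    --libdir /usr/local/lib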
00:08:24.474 Build targets in project: 8
00:08:24.474 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:08:24.474 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:08:24.474
00:08:24.474 libvfio-user 0.0.1
00:08:24.474
00:08:24.474 User defined options
00:08:24.474 buildtype : debug
00:08:24.474 default_library: static
00:08:24.474 libdir : /usr/local/lib
00:08:24.474
00:08:24.474 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:08:24.731 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:08:24.731 [1/36] Compiling C object samples/lspci.p/lspci.c.o
00:08:24.731 [2/36] Compiling C object samples/client.p/.._lib_tran.c.o
00:08:24.731 [3/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:08:24.731 [4/36] Compiling C object samples/null.p/null.c.o
00:08:24.731 [5/36] Compiling C object lib/libvfio-user.a.p/irq.c.o
00:08:24.731 [6/36] Compiling C object lib/libvfio-user.a.p/tran.c.o
00:08:24.731 [7/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:08:24.731 [8/36] Compiling C object lib/libvfio-user.a.p/migration.c.o
00:08:24.731 [9/36] Compiling C object samples/client.p/.._lib_migration.c.o
00:08:24.731 [10/36] Compiling C object lib/libvfio-user.a.p/dma.c.o
00:08:24.731 [11/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:08:24.731 [12/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:08:24.731 [13/36] Compiling C object lib/libvfio-user.a.p/pci.c.o
00:08:24.731 [14/36] Compiling C object test/unit_tests.p/unit-tests.c.o
00:08:24.731 [15/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:08:24.731 [16/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o
00:08:24.989 [17/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o
00:08:24.989 [18/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:08:24.989 [19/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:08:24.989 [20/36] Compiling C object samples/server.p/server.c.o
00:08:24.989 [21/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:08:24.989 [22/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:08:24.989 [23/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:08:24.989 [24/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:08:24.989 [25/36] Compiling C object test/unit_tests.p/mocks.c.o
00:08:24.989 [26/36] Compiling C object samples/client.p/client.c.o
00:08:24.989 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o
00:08:24.989 [28/36] Linking static target lib/libvfio-user.a
00:08:24.989 [29/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:08:24.989 [30/36] Linking target samples/client
00:08:24.989 [31/36] Linking target samples/lspci
00:08:24.989 [32/36] Linking target samples/server
00:08:24.989 [33/36] Linking target samples/null
00:08:24.989 [34/36] Linking target samples/shadow_ioeventfd_server
00:08:24.989 [35/36] Linking target samples/gpio-pci-idio-16
00:08:24.989 [36/36] Linking target test/unit_tests
00:08:24.989 INFO: autodetecting backend as ninja
00:08:24.989 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:08:24.989 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug
00:08:25.551 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug'
00:08:25.551 ninja: no work to do.
00:08:32.108 The Meson build system
00:08:32.108 Version: 1.5.0
00:08:32.108 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
00:08:32.108 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp
00:08:32.108 Build type: native build
00:08:32.108 Program cat found: YES (/usr/bin/cat)
00:08:32.108 Project name: DPDK
00:08:32.108 Project version: 24.03.0
00:08:32.108 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)")
00:08:32.108 C linker for the host machine: clang-17 ld.bfd 2.40-14
00:08:32.108 Host machine cpu family: x86_64
00:08:32.108 Host machine cpu: x86_64
00:08:32.108 Message: ## Building in Developer Mode ##
00:08:32.108 Program pkg-config found: YES (/usr/bin/pkg-config)
00:08:32.108 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:08:32.108 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:08:32.108 Program python3 found: YES (/usr/bin/python3)
00:08:32.108 Program cat found: YES (/usr/bin/cat)
00:08:32.108 Compiler for C supports arguments -march=native: YES
00:08:32.108 Checking for size of "void *" : 8
00:08:32.108 Checking for size of "void *" : 8 (cached)
00:08:32.108 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:08:32.108 Library m found: YES
00:08:32.108 Library numa found: YES
00:08:32.108 Has header "numaif.h" : YES
00:08:32.108 Library fdt found: NO
00:08:32.108 Library execinfo found: NO
00:08:32.108 Has header "execinfo.h" : YES
00:08:32.108 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:08:32.108 Run-time dependency libarchive found: NO (tried pkgconfig)
00:08:32.108 Run-time dependency libbsd found: NO (tried pkgconfig)
00:08:32.108 Run-time dependency jansson found: NO (tried pkgconfig)
00:08:32.108 Run-time dependency openssl found: YES 3.1.1
00:08:32.108 Run-time dependency libpcap found: YES 1.10.4
00:08:32.108 Has header "pcap.h" with dependency libpcap: YES
00:08:32.108 Compiler for C supports arguments -Wcast-qual: YES
00:08:32.108 Compiler for C supports arguments -Wdeprecated: YES
00:08:32.108 Compiler for C supports arguments -Wformat: YES
00:08:32.108 Compiler for C supports arguments -Wformat-nonliteral: YES
00:08:32.108 Compiler for C supports arguments -Wformat-security: YES
00:08:32.108 Compiler for C supports arguments -Wmissing-declarations: YES
00:08:32.108 Compiler for C supports arguments -Wmissing-prototypes: YES
00:08:32.108 Compiler for C supports arguments -Wnested-externs: YES
00:08:32.108 Compiler for C supports arguments -Wold-style-definition: YES
00:08:32.108 Compiler for C supports arguments -Wpointer-arith: YES
00:08:32.108 Compiler for C supports arguments -Wsign-compare: YES
00:08:32.108 Compiler for C supports arguments -Wstrict-prototypes: YES
00:08:32.108 Compiler for C supports arguments -Wundef: YES
00:08:32.108 Compiler for C supports arguments -Wwrite-strings: YES
00:08:32.108 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:08:32.108 Compiler for C supports arguments -Wno-packed-not-aligned: NO
00:08:32.108 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:08:32.108 Program objdump found: YES (/usr/bin/objdump)
00:08:32.108 Compiler for C supports arguments -mavx512f: YES
00:08:32.108 Checking if "AVX512 checking" compiles: YES
00:08:32.108 Fetching value of define "__SSE4_2__" : 1
00:08:32.108 Fetching value of define "__AES__" : 1
00:08:32.108 Fetching value of define "__AVX__" : 1
00:08:32.108 Fetching value of define "__AVX2__" : 1
00:08:32.108 Fetching value of define "__AVX512BW__" : 1
00:08:32.108 Fetching value of define "__AVX512CD__" : 1
00:08:32.108 Fetching value of define "__AVX512DQ__" : 1
00:08:32.108 Fetching value of define "__AVX512F__" : 1
00:08:32.108 Fetching value of define "__AVX512VL__" : 1
00:08:32.108 Fetching value of define "__PCLMUL__" : 1
00:08:32.108 Fetching value of define "__RDRND__" : 1
00:08:32.108 Fetching value of define "__RDSEED__" : 1
00:08:32.108 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:08:32.108 Fetching value of define "__znver1__" : (undefined)
00:08:32.108 Fetching value of define "__znver2__" : (undefined)
00:08:32.108 Fetching value of define "__znver3__" : (undefined)
00:08:32.108 Fetching value of define "__znver4__" : (undefined)
00:08:32.108 Compiler for C supports arguments -Wno-format-truncation: NO
00:08:32.108 Message: lib/log: Defining dependency "log"
00:08:32.108 Message: lib/kvargs: Defining dependency "kvargs"
00:08:32.108 Message: lib/telemetry: Defining dependency "telemetry"
00:08:32.108 Checking for function "getentropy" : NO
00:08:32.108 Message: lib/eal: Defining dependency "eal"
00:08:32.108 Message: lib/ring: Defining dependency "ring"
00:08:32.108 Message: lib/rcu: Defining dependency "rcu"
00:08:32.108 Message: lib/mempool: Defining dependency "mempool"
00:08:32.108 Message: lib/mbuf: Defining dependency "mbuf"
00:08:32.108 Fetching value of define "__PCLMUL__" : 1 (cached)
00:08:32.108 Fetching value of define "__AVX512F__" : 1 (cached)
00:08:32.108 Fetching value of define "__AVX512BW__" : 1 (cached)
00:08:32.108 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:08:32.108 Fetching value of define "__AVX512VL__" : 1 (cached)
00:08:32.108 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:08:32.108 Compiler for C supports arguments -mpclmul: YES
00:08:32.108 Compiler for C supports arguments -maes: YES
00:08:32.108 Compiler for C supports arguments -mavx512f: YES (cached)
00:08:32.108 Compiler for C supports arguments -mavx512bw: YES
00:08:32.108 Compiler for C supports arguments -mavx512dq: YES
00:08:32.108 Compiler for C supports arguments -mavx512vl: YES
00:08:32.108 Compiler for C supports arguments -mvpclmulqdq: YES
00:08:32.108 Compiler for C supports arguments -mavx2: YES
00:08:32.108 Compiler for C supports arguments -mavx: YES
00:08:32.108 Message: lib/net: Defining dependency "net"
00:08:32.108 Message: lib/meter: Defining dependency "meter"
00:08:32.108 Message: lib/ethdev: Defining dependency "ethdev"
00:08:32.108 Message: lib/pci: Defining dependency "pci"
00:08:32.108 Message: lib/cmdline: Defining dependency "cmdline"
00:08:32.108 Message: lib/hash: Defining dependency "hash"
00:08:32.108 Message: lib/timer: Defining dependency "timer"
00:08:32.108 Message: lib/compressdev: Defining dependency "compressdev"
00:08:32.108 Message: lib/cryptodev: Defining dependency "cryptodev"
00:08:32.108 Message: lib/dmadev: Defining dependency "dmadev"
00:08:32.108 Compiler for C supports arguments -Wno-cast-qual: YES
00:08:32.108 Message: lib/power: Defining dependency "power"
00:08:32.108 Message: lib/reorder: Defining dependency "reorder"
00:08:32.109 Message: lib/security: Defining dependency "security"
00:08:32.109 Has header "linux/userfaultfd.h" : YES
00:08:32.109 Has header "linux/vduse.h" : YES
00:08:32.109 Message: lib/vhost: Defining dependency "vhost"
00:08:32.109 Compiler for C supports arguments -Wno-format-truncation: NO (cached)
00:08:32.109 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:08:32.109 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:08:32.109 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:08:32.109 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:08:32.109 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:08:32.109 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:08:32.109 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:08:32.109 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:08:32.109 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:08:32.109 Program doxygen found: YES (/usr/local/bin/doxygen)
00:08:32.109 Configuring doxy-api-html.conf using configuration
00:08:32.109 Configuring doxy-api-man.conf using configuration
00:08:32.109 Program mandb found: YES (/usr/bin/mandb)
00:08:32.109 Program sphinx-build found: NO
00:08:32.109 Configuring rte_build_config.h using configuration
00:08:32.109 Message:
00:08:32.109 =================
00:08:32.109 Applications Enabled
00:08:32.109 =================
00:08:32.109
00:08:32.109 apps:
00:08:32.109
00:08:32.109
00:08:32.109 Message:
00:08:32.109 =================
00:08:32.109 Libraries Enabled
00:08:32.109 =================
00:08:32.109
00:08:32.109 libs:
00:08:32.109 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:08:32.109 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:08:32.109 cryptodev, dmadev, power, reorder, security, vhost,
00:08:32.109
00:08:32.109 Message:
00:08:32.109 ===============
00:08:32.109 Drivers Enabled
00:08:32.109 ===============
00:08:32.109
00:08:32.109 common:
00:08:32.109
00:08:32.109 bus:
00:08:32.109 pci, vdev,
00:08:32.109 mempool:
00:08:32.109 ring,
00:08:32.109 dma:
00:08:32.109
00:08:32.109 net:
00:08:32.109
00:08:32.109 crypto:
00:08:32.109
00:08:32.109 compress:
00:08:32.109
00:08:32.109 vdpa:
00:08:32.109
00:08:32.109
00:08:32.109 Message:
00:08:32.109 =================
00:08:32.109 Content Skipped
00:08:32.109 =================
00:08:32.109
00:08:32.109 apps:
00:08:32.109 dumpcap: explicitly disabled via build config
00:08:32.109 graph: explicitly disabled via build config
00:08:32.109 pdump: explicitly disabled via build config
00:08:32.109 proc-info: explicitly disabled via build config
00:08:32.109 test-acl: explicitly disabled via build config
00:08:32.109 test-bbdev: explicitly disabled via build config
00:08:32.109 test-cmdline: explicitly disabled via build config
00:08:32.109 test-compress-perf: explicitly disabled via build config
00:08:32.109 test-crypto-perf: explicitly disabled via build config
00:08:32.109 test-dma-perf: explicitly disabled via build config
00:08:32.109 test-eventdev: explicitly disabled via build config
00:08:32.109 test-fib: explicitly disabled via build config
00:08:32.109 test-flow-perf: explicitly disabled via build config
00:08:32.109 test-gpudev: explicitly disabled via build config
00:08:32.109 test-mldev: explicitly disabled via build config
00:08:32.109 test-pipeline: explicitly disabled via build config
00:08:32.109 test-pmd: explicitly disabled via build config
00:08:32.109 test-regex: explicitly disabled via build config
00:08:32.109 test-sad: explicitly disabled via build config
00:08:32.109 test-security-perf: explicitly disabled via build config
00:08:32.109
00:08:32.109 libs:
00:08:32.109 argparse: explicitly disabled via build config
00:08:32.109 metrics: explicitly disabled via build config
00:08:32.109 acl: explicitly disabled via build config
00:08:32.109 bbdev: explicitly disabled via build config
00:08:32.109 bitratestats: explicitly disabled via build config
00:08:32.109 bpf: explicitly disabled via build config
00:08:32.109 cfgfile: explicitly disabled via build config
00:08:32.109 distributor: explicitly disabled via build config
00:08:32.109 efd: explicitly disabled via build config
00:08:32.109 eventdev: explicitly disabled via build config
00:08:32.109 dispatcher: explicitly disabled via build config
00:08:32.109 gpudev: explicitly disabled via build config
00:08:32.109 gro: explicitly disabled via build config
00:08:32.109 gso: explicitly disabled via build config
00:08:32.109 ip_frag: explicitly disabled via build config
00:08:32.109 jobstats: explicitly disabled via build config
00:08:32.109 latencystats: explicitly disabled via build config
00:08:32.109 lpm: explicitly disabled via build config
00:08:32.109 member: explicitly disabled via build config
00:08:32.109 pcapng: explicitly disabled via build config
00:08:32.109 rawdev: explicitly disabled via build config
00:08:32.109 regexdev: explicitly disabled via build config
00:08:32.109 mldev: explicitly disabled via build config
00:08:32.109 rib: explicitly disabled via build config
00:08:32.109 sched: explicitly disabled via build config
00:08:32.109 stack: explicitly disabled via build config
00:08:32.109 ipsec: explicitly disabled via build config
00:08:32.109 pdcp: explicitly disabled via build config
00:08:32.109 fib: explicitly disabled via build config
00:08:32.109 port: explicitly disabled via build config
00:08:32.109 pdump: explicitly disabled via build config
00:08:32.109 table: explicitly disabled via build config
00:08:32.109 pipeline: explicitly disabled via build config
00:08:32.109 graph: explicitly disabled via build config
00:08:32.109 node: explicitly disabled via build config
00:08:32.109
00:08:32.109 drivers:
00:08:32.109 common/cpt: not in enabled drivers build config
00:08:32.109 common/dpaax: not in enabled drivers build config
00:08:32.109 common/iavf: not in enabled drivers build config
00:08:32.109 common/idpf: not in enabled drivers build config
00:08:32.109 common/ionic: not in enabled drivers build config
00:08:32.109 common/mvep: not in enabled drivers build config
00:08:32.109 common/octeontx: not in enabled drivers build config
00:08:32.109 bus/auxiliary: not in enabled drivers build config
00:08:32.109 bus/cdx: not in enabled drivers build config
00:08:32.109 bus/dpaa: not in enabled drivers build config
00:08:32.109 bus/fslmc: not in enabled drivers build config
00:08:32.109 bus/ifpga: not in enabled drivers build config
00:08:32.109 bus/platform: not in enabled drivers build config
00:08:32.109 bus/uacce: not in enabled drivers build config
00:08:32.109 bus/vmbus: not in enabled drivers build config
00:08:32.109 common/cnxk: not in enabled drivers build config
00:08:32.109 common/mlx5: not in enabled drivers build config
00:08:32.109 common/nfp: not in enabled drivers build config
00:08:32.109 common/nitrox: not in enabled drivers build config
00:08:32.109 common/qat: not in enabled drivers build config
00:08:32.109 common/sfc_efx: not in enabled drivers build config
00:08:32.109 mempool/bucket: not in enabled drivers build config
00:08:32.109 mempool/cnxk: not in enabled drivers build config
00:08:32.109 mempool/dpaa: not in enabled drivers build config
00:08:32.109 mempool/dpaa2: not in enabled drivers build config
00:08:32.109 mempool/octeontx: not in enabled drivers build config
00:08:32.109 mempool/stack: not in enabled drivers build config
00:08:32.109 dma/cnxk: not in enabled drivers build config
00:08:32.109 dma/dpaa: not in enabled drivers build config
00:08:32.109 dma/dpaa2: not in enabled drivers build config
00:08:32.109 dma/hisilicon: not in enabled drivers build config
00:08:32.109 dma/idxd: not in enabled drivers build config
00:08:32.109 dma/ioat: not in enabled drivers build config
00:08:32.109 dma/skeleton: not in enabled drivers build config
00:08:32.109 net/af_packet: not in enabled drivers build config
00:08:32.109 net/af_xdp: not in enabled drivers build config
00:08:32.109 net/ark: not in enabled drivers build config
00:08:32.109 net/atlantic: not in enabled drivers build config
00:08:32.109 net/avp: not in enabled drivers build config
00:08:32.109 net/axgbe: not in enabled drivers build config
00:08:32.109 net/bnx2x: not in enabled drivers build config
00:08:32.109 net/bnxt: not in enabled drivers build config
00:08:32.109 net/bonding: not in enabled drivers build config
00:08:32.109 net/cnxk: not in enabled drivers build config
00:08:32.109 net/cpfl: not in enabled drivers build config
00:08:32.109 net/cxgbe: not in enabled drivers build config
00:08:32.109 net/dpaa: not in enabled drivers build config
00:08:32.109 net/dpaa2: not in enabled drivers build config
00:08:32.109 net/e1000: not in enabled drivers build config
00:08:32.109 net/ena: not in enabled drivers build config
00:08:32.109 net/enetc: not in enabled drivers build config
00:08:32.109 net/enetfec: not in enabled drivers build config
00:08:32.109 net/enic: not in enabled drivers build config
00:08:32.109 net/failsafe: not in enabled drivers build config
00:08:32.109 net/fm10k: not in enabled drivers build config
00:08:32.109 net/gve: not in enabled drivers build config
00:08:32.109 net/hinic: not in enabled drivers build config
00:08:32.109 net/hns3: not in enabled drivers build config
00:08:32.109 net/i40e: not in enabled drivers build config
00:08:32.109 net/iavf: not in enabled drivers build config
00:08:32.109 net/ice: not in enabled drivers build config
00:08:32.109 net/idpf: not in enabled drivers build config
00:08:32.109 net/igc: not in enabled drivers build config
00:08:32.109 net/ionic: not in enabled drivers build config
00:08:32.109 net/ipn3ke: not in enabled drivers build config
00:08:32.109 net/ixgbe: not in enabled drivers build config
00:08:32.109 net/mana: not in enabled drivers build config
00:08:32.109 net/memif: not in enabled drivers build config
00:08:32.110 net/mlx4: not in enabled drivers build config
00:08:32.110 net/mlx5: not in enabled drivers build config
00:08:32.110 net/mvneta: not in enabled drivers build config
00:08:32.110 net/mvpp2: not in enabled drivers build config
00:08:32.110 net/netvsc: not in enabled drivers build config
00:08:32.110 net/nfb: not in enabled drivers build config
00:08:32.110 net/nfp: not in enabled drivers build config
00:08:32.110 net/ngbe: not in enabled drivers build config
00:08:32.110 net/null: not in enabled drivers build config
00:08:32.110 net/octeontx: not in enabled drivers build config
00:08:32.110 net/octeon_ep: not in enabled drivers build config
00:08:32.110 net/pcap: not in enabled drivers build config
00:08:32.110 net/pfe: not in enabled drivers build config
00:08:32.110 net/qede: not in enabled drivers build config
00:08:32.110 net/ring: not in enabled drivers build config
00:08:32.110 net/sfc: not in enabled drivers build config
00:08:32.110 net/softnic: not in enabled drivers build config
00:08:32.110 net/tap: not in enabled drivers build config
00:08:32.110 net/thunderx: not in enabled drivers build config
00:08:32.110 net/txgbe: not in enabled drivers build config
00:08:32.110 net/vdev_netvsc: not in enabled drivers build config
00:08:32.110 net/vhost: not in enabled drivers build config
00:08:32.110 net/virtio: not in enabled drivers build config
00:08:32.110 net/vmxnet3: not in enabled drivers build config
00:08:32.110 raw/*: missing internal dependency, "rawdev"
00:08:32.110 crypto/armv8: not in enabled drivers build config
00:08:32.110 crypto/bcmfs: not in enabled drivers build config
00:08:32.110 crypto/caam_jr: not in enabled drivers build config
00:08:32.110 crypto/ccp: not in enabled drivers build config
00:08:32.110 crypto/cnxk: not in enabled drivers build config
00:08:32.110 crypto/dpaa_sec: not in enabled drivers build config
00:08:32.110 crypto/dpaa2_sec: not in enabled drivers build config
00:08:32.110 crypto/ipsec_mb: not in enabled drivers build config
00:08:32.110 crypto/mlx5: not in enabled drivers build config
00:08:32.110 crypto/mvsam: not in enabled drivers build config
00:08:32.110 crypto/nitrox: not in enabled drivers build config
00:08:32.110 crypto/null: not in enabled drivers build config
00:08:32.110 crypto/octeontx: not in enabled drivers build config
00:08:32.110 crypto/openssl: not in enabled drivers build config
00:08:32.110 crypto/scheduler: not in enabled drivers build config
00:08:32.110 crypto/uadk: not in enabled drivers build config
00:08:32.110 crypto/virtio: not in enabled drivers build config
00:08:32.110 compress/isal: not in enabled drivers build config
00:08:32.110 compress/mlx5: not in enabled drivers build config
00:08:32.110 compress/nitrox: not in enabled drivers build config
00:08:32.110 compress/octeontx: not in enabled drivers build config
00:08:32.110 compress/zlib: not in enabled drivers build config
00:08:32.110 regex/*: missing internal dependency, "regexdev"
00:08:32.110 ml/*: missing internal dependency, "mldev"
00:08:32.110 vdpa/ifc: not in enabled drivers build config
00:08:32.110 vdpa/mlx5: not in enabled drivers build config
00:08:32.110 vdpa/nfp: not in enabled drivers build config
00:08:32.110 vdpa/sfc: not in enabled drivers build config
00:08:32.110 event/*: missing internal dependency, "eventdev"
00:08:32.110 baseband/*: missing internal dependency, "bbdev"
00:08:32.110 gpu/*: missing internal dependency, "gpudev"
00:08:32.110
00:08:32.110
00:08:32.368 Build targets in project: 85
00:08:32.368
00:08:32.368 DPDK 24.03.0
00:08:32.368
00:08:32.368 User defined options
00:08:32.368 buildtype : debug
00:08:32.368 default_library : static
00:08:32.368 libdir : lib
00:08:32.368 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build
00:08:32.368 c_args : -fPIC -Werror
00:08:32.368 c_link_args :
00:08:32.368 cpu_instruction_set: native
00:08:32.368 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:08:32.368 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:08:32.368 enable_docs : false
00:08:32.368 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:08:32.368 enable_kmods : false
00:08:32.368 max_lcores : 128
00:08:32.368 tests : false
00:08:32.368
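The "User defined options" summary above maps onto -D flags for DPDK's meson build. A sketch of the equivalent command line (apps_list and libs_list stand for the two comma-separated lists printed above; the real invocation is assembled by SPDK's dpdkbuild Makefile, so treat this as illustrative):

  # Roughly the DPDK configuration SPDK requests: debug, static, minimal drivers.
  meson setup build-tmp \
    -Dbuildtype=debug -Ddefault_library=static \
    -Dc_args='-fPIC -Werror' -Dcpu_instruction_set=native \
    -Denable_docs=false -Denable_kmods=false \
    -Dmax_lcores=128 -Dtests=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Ddisable_apps="$apps_list" -Ddisable_libs="$libs_list"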
lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:08:33.726 [35/268] Linking static target lib/librte_ring.a 00:08:33.726 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:08:33.726 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:08:33.726 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:08:33.726 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:08:33.726 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:08:33.726 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:08:33.726 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:08:33.726 [43/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:08:33.726 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:08:33.726 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:08:33.726 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:08:33.726 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:08:33.726 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:08:33.726 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:08:33.726 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:08:33.726 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:08:33.726 [52/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:08:33.726 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:08:33.726 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:08:33.726 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:08:33.726 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:08:33.726 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:08:33.726 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:08:33.726 [59/268] Linking static target lib/librte_telemetry.a 00:08:33.726 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:08:33.726 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:08:33.726 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:08:33.726 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:08:33.726 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:08:33.726 [65/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:08:33.726 [66/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:08:33.726 [67/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:08:33.726 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:08:33.726 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:08:33.726 [70/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:08:33.726 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:08:33.726 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:08:33.726 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 
00:08:33.726 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:08:33.726 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:08:33.726 [76/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:08:33.726 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:08:33.726 [78/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:08:33.726 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:08:33.726 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:08:33.726 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:08:33.726 [82/268] Linking static target lib/librte_pci.a 00:08:33.726 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:08:33.726 [84/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:08:33.726 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:08:33.726 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:08:33.726 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:08:33.726 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:08:33.726 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:08:33.726 [90/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:08:33.992 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:08:33.992 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:08:33.992 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:08:33.992 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:08:33.992 [95/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:08:33.992 [96/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:08:33.992 [97/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:08:33.992 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:08:33.992 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:08:33.992 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:08:33.992 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:08:33.992 [102/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:08:33.992 [103/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:08:33.992 [104/268] Linking static target lib/librte_eal.a 00:08:33.992 [105/268] Linking static target lib/librte_rcu.a 00:08:33.992 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:08:33.992 [107/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:08:33.992 [108/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:08:33.992 [109/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:08:33.992 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:08:33.992 [111/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:08:33.992 [112/268] Linking static target lib/librte_mempool.a 00:08:33.992 [113/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:08:33.992 [114/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:08:34.252 [115/268] Compiling C object 
lib/librte_meter.a.p/meter_rte_meter.c.o 00:08:34.252 [116/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:08:34.252 [117/268] Linking static target lib/librte_meter.a 00:08:34.252 [118/268] Linking static target lib/librte_mbuf.a 00:08:34.252 [119/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:08:34.252 [120/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:08:34.252 [121/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:34.252 [122/268] Linking static target lib/librte_net.a 00:08:34.252 [123/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:08:34.252 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:08:34.252 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:08:34.252 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:08:34.252 [127/268] Linking target lib/librte_log.so.24.1 00:08:34.252 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:08:34.252 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:08:34.512 [130/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:08:34.512 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:08:34.512 [132/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:08:34.512 [133/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:08:34.512 [134/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:08:34.512 [135/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:08:34.512 [136/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:08:34.512 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:08:34.512 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:08:34.512 [139/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:08:34.512 [140/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:08:34.512 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:08:34.512 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:08:34.512 [143/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:08:34.512 [144/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:08:34.512 [145/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:08:34.512 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:08:34.512 [147/268] Linking static target lib/librte_cmdline.a 00:08:34.512 [148/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:08:34.512 [149/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:08:34.512 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:08:34.512 [151/268] Linking static target lib/librte_timer.a 00:08:34.512 [152/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:08:34.512 [153/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:08:34.512 [154/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:08:34.512 [155/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:08:34.512 [156/268] Linking target lib/librte_kvargs.so.24.1 00:08:34.512 [157/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:08:34.512 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:08:34.512 [159/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:08:34.512 [160/268] Linking target lib/librte_telemetry.so.24.1 00:08:34.512 [161/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:08:34.512 [162/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:08:34.512 [163/268] Linking static target lib/librte_dmadev.a 00:08:34.512 [164/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:08:34.512 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:08:34.512 [166/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:08:34.512 [167/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:08:34.512 [168/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:08:34.512 [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:08:34.772 [170/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:08:34.772 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:08:34.772 [172/268] Linking static target lib/librte_compressdev.a 00:08:34.772 [173/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:08:34.772 [174/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:08:34.772 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:08:34.772 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:08:34.772 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:08:34.772 [178/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:08:34.772 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:08:34.772 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:08:34.772 [181/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:08:34.772 [182/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:08:34.772 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:08:34.772 [184/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:08:34.772 [185/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:08:34.772 [186/268] Linking static target lib/librte_security.a 00:08:34.772 [187/268] Linking static target lib/librte_power.a 00:08:34.772 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:08:34.772 [189/268] Linking static target lib/librte_reorder.a 00:08:34.772 [190/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:08:34.772 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:08:34.772 [192/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:08:34.772 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:08:34.772 [194/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:08:34.772 [195/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:34.772 [196/268] 
Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:34.772 [197/268] Linking static target drivers/librte_bus_vdev.a 00:08:34.772 [198/268] Linking static target lib/librte_hash.a 00:08:34.772 [199/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:08:34.772 [200/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:08:35.031 [201/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:08:35.031 [202/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:08:35.031 [203/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:35.031 [204/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:08:35.031 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:08:35.031 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:08:35.031 [207/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:08:35.031 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:35.031 [209/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:35.031 [210/268] Linking static target lib/librte_cryptodev.a 00:08:35.031 [211/268] Linking static target drivers/librte_bus_pci.a 00:08:35.031 [212/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:08:35.031 [213/268] Linking static target lib/librte_ethdev.a 00:08:35.031 [214/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:08:35.290 [215/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:08:35.290 [216/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:35.290 [217/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:35.290 [218/268] Linking static target drivers/librte_mempool_ring.a 00:08:35.290 [219/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:08:35.290 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:35.290 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:35.290 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:08:35.549 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:35.807 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:08:35.807 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:08:35.807 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:35.807 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:08:36.066 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:08:36.325 [229/268] Linking static target lib/librte_vhost.a 00:08:37.279 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:38.215 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:08:44.781 [232/268] Generating 
lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:46.158 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:08:46.158 [234/268] Linking target lib/librte_eal.so.24.1 00:08:46.158 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:08:46.158 [236/268] Linking target lib/librte_meter.so.24.1 00:08:46.158 [237/268] Linking target lib/librte_ring.so.24.1 00:08:46.158 [238/268] Linking target lib/librte_timer.so.24.1 00:08:46.158 [239/268] Linking target lib/librte_pci.so.24.1 00:08:46.158 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:08:46.158 [241/268] Linking target lib/librte_dmadev.so.24.1 00:08:46.416 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:08:46.416 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:08:46.416 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:08:46.416 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:08:46.416 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:08:46.416 [247/268] Linking target lib/librte_rcu.so.24.1 00:08:46.416 [248/268] Linking target lib/librte_mempool.so.24.1 00:08:46.416 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:08:46.675 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:08:46.675 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:08:46.675 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:08:46.675 [253/268] Linking target lib/librte_mbuf.so.24.1 00:08:46.934 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:08:46.934 [255/268] Linking target lib/librte_net.so.24.1 00:08:46.934 [256/268] Linking target lib/librte_compressdev.so.24.1 00:08:46.934 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:08:46.934 [258/268] Linking target lib/librte_reorder.so.24.1 00:08:47.192 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:08:47.192 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:08:47.192 [261/268] Linking target lib/librte_security.so.24.1 00:08:47.192 [262/268] Linking target lib/librte_hash.so.24.1 00:08:47.192 [263/268] Linking target lib/librte_cmdline.so.24.1 00:08:47.192 [264/268] Linking target lib/librte_ethdev.so.24.1 00:08:47.451 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:08:47.451 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:08:47.451 [267/268] Linking target lib/librte_power.so.24.1 00:08:47.451 [268/268] Linking target lib/librte_vhost.so.24.1 00:08:47.451 INFO: autodetecting backend as ninja 00:08:47.451 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:08:48.827 CC lib/ut/ut.o 00:08:48.827 CC lib/ut_mock/mock.o 00:08:48.828 CC lib/log/log.o 00:08:48.828 CC lib/log/log_flags.o 00:08:48.828 CC lib/log/log_deprecated.o 00:08:48.828 LIB libspdk_ut.a 00:08:48.828 LIB libspdk_ut_mock.a 00:08:48.828 LIB libspdk_log.a 00:08:49.394 CC lib/dma/dma.o 00:08:49.394 CXX lib/trace_parser/trace.o 00:08:49.394 CC lib/util/cpuset.o 00:08:49.394 CC lib/util/crc32c.o 00:08:49.394 CC 
lib/util/base64.o 00:08:49.394 CC lib/util/crc16.o 00:08:49.394 CC lib/util/bit_array.o 00:08:49.394 CC lib/util/crc32.o 00:08:49.394 CC lib/util/crc32_ieee.o 00:08:49.394 CC lib/util/file.o 00:08:49.394 CC lib/util/crc64.o 00:08:49.394 CC lib/util/fd_group.o 00:08:49.394 CC lib/util/dif.o 00:08:49.394 CC lib/util/fd.o 00:08:49.394 CC lib/util/hexlify.o 00:08:49.394 CC lib/util/iov.o 00:08:49.394 CC lib/util/math.o 00:08:49.394 CC lib/util/net.o 00:08:49.394 CC lib/ioat/ioat.o 00:08:49.394 CC lib/util/strerror_tls.o 00:08:49.394 CC lib/util/xor.o 00:08:49.394 CC lib/util/string.o 00:08:49.394 CC lib/util/uuid.o 00:08:49.394 CC lib/util/pipe.o 00:08:49.395 CC lib/util/zipf.o 00:08:49.395 CC lib/util/md5.o 00:08:49.395 CC lib/vfio_user/host/vfio_user_pci.o 00:08:49.395 CC lib/vfio_user/host/vfio_user.o 00:08:49.395 LIB libspdk_dma.a 00:08:49.395 LIB libspdk_ioat.a 00:08:49.653 LIB libspdk_vfio_user.a 00:08:49.653 LIB libspdk_util.a 00:08:49.911 LIB libspdk_trace_parser.a 00:08:49.911 CC lib/idxd/idxd_user.o 00:08:49.911 CC lib/idxd/idxd.o 00:08:49.911 CC lib/idxd/idxd_kernel.o 00:08:49.911 CC lib/rdma_provider/common.o 00:08:49.911 CC lib/rdma_provider/rdma_provider_verbs.o 00:08:49.911 CC lib/vmd/vmd.o 00:08:49.911 CC lib/vmd/led.o 00:08:49.911 CC lib/json/json_parse.o 00:08:49.911 CC lib/json/json_write.o 00:08:49.911 CC lib/json/json_util.o 00:08:49.911 CC lib/conf/conf.o 00:08:49.911 CC lib/rdma_utils/rdma_utils.o 00:08:49.911 CC lib/env_dpdk/env.o 00:08:49.911 CC lib/env_dpdk/init.o 00:08:49.911 CC lib/env_dpdk/memory.o 00:08:49.911 CC lib/env_dpdk/pci.o 00:08:49.911 CC lib/env_dpdk/pci_ioat.o 00:08:49.911 CC lib/env_dpdk/threads.o 00:08:49.911 CC lib/env_dpdk/pci_virtio.o 00:08:49.911 CC lib/env_dpdk/pci_vmd.o 00:08:49.911 CC lib/env_dpdk/pci_idxd.o 00:08:49.911 CC lib/env_dpdk/pci_event.o 00:08:49.911 CC lib/env_dpdk/sigbus_handler.o 00:08:49.911 CC lib/env_dpdk/pci_dpdk.o 00:08:49.911 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:49.911 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:50.170 LIB libspdk_rdma_provider.a 00:08:50.170 LIB libspdk_rdma_utils.a 00:08:50.170 LIB libspdk_conf.a 00:08:50.170 LIB libspdk_json.a 00:08:50.429 LIB libspdk_idxd.a 00:08:50.429 LIB libspdk_vmd.a 00:08:50.687 CC lib/jsonrpc/jsonrpc_server.o 00:08:50.687 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:50.687 CC lib/jsonrpc/jsonrpc_client.o 00:08:50.687 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:50.687 LIB libspdk_jsonrpc.a 00:08:50.946 LIB libspdk_env_dpdk.a 00:08:51.205 CC lib/rpc/rpc.o 00:08:51.205 LIB libspdk_rpc.a 00:08:51.773 CC lib/trace/trace_flags.o 00:08:51.773 CC lib/keyring/keyring.o 00:08:51.773 CC lib/trace/trace.o 00:08:51.773 CC lib/keyring/keyring_rpc.o 00:08:51.773 CC lib/trace/trace_rpc.o 00:08:51.773 CC lib/notify/notify_rpc.o 00:08:51.773 CC lib/notify/notify.o 00:08:51.773 LIB libspdk_notify.a 00:08:51.773 LIB libspdk_trace.a 00:08:51.773 LIB libspdk_keyring.a 00:08:52.031 CC lib/sock/sock.o 00:08:52.031 CC lib/sock/sock_rpc.o 00:08:52.031 CC lib/thread/thread.o 00:08:52.031 CC lib/thread/iobuf.o 00:08:52.599 LIB libspdk_sock.a 00:08:52.599 CC lib/nvme/nvme_fabric.o 00:08:52.599 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:52.599 CC lib/nvme/nvme_ctrlr.o 00:08:52.599 CC lib/nvme/nvme_ns_cmd.o 00:08:52.599 CC lib/nvme/nvme_ns.o 00:08:52.599 CC lib/nvme/nvme_pcie_common.o 00:08:52.599 CC lib/nvme/nvme_transport.o 00:08:52.599 CC lib/nvme/nvme_pcie.o 00:08:52.599 CC lib/nvme/nvme_qpair.o 00:08:52.599 CC lib/nvme/nvme.o 00:08:52.599 CC lib/nvme/nvme_quirks.o 00:08:52.599 CC lib/nvme/nvme_discovery.o 00:08:52.599 CC 
lib/nvme/nvme_io_msg.o 00:08:52.599 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:52.599 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:52.599 CC lib/nvme/nvme_opal.o 00:08:52.599 CC lib/nvme/nvme_tcp.o 00:08:52.599 CC lib/nvme/nvme_poll_group.o 00:08:52.599 CC lib/nvme/nvme_vfio_user.o 00:08:52.599 CC lib/nvme/nvme_zns.o 00:08:52.599 CC lib/nvme/nvme_stubs.o 00:08:52.599 CC lib/nvme/nvme_auth.o 00:08:52.599 CC lib/nvme/nvme_cuse.o 00:08:52.599 CC lib/nvme/nvme_rdma.o 00:08:53.168 LIB libspdk_thread.a 00:08:53.425 CC lib/blob/blobstore.o 00:08:53.425 CC lib/blob/request.o 00:08:53.425 CC lib/blob/zeroes.o 00:08:53.425 CC lib/virtio/virtio.o 00:08:53.425 CC lib/virtio/virtio_vhost_user.o 00:08:53.425 CC lib/blob/blob_bs_dev.o 00:08:53.425 CC lib/virtio/virtio_vfio_user.o 00:08:53.425 CC lib/virtio/virtio_pci.o 00:08:53.425 CC lib/accel/accel.o 00:08:53.425 CC lib/accel/accel_rpc.o 00:08:53.425 CC lib/accel/accel_sw.o 00:08:53.425 CC lib/vfu_tgt/tgt_endpoint.o 00:08:53.425 CC lib/vfu_tgt/tgt_rpc.o 00:08:53.425 CC lib/init/json_config.o 00:08:53.425 CC lib/init/subsystem_rpc.o 00:08:53.425 CC lib/init/subsystem.o 00:08:53.425 CC lib/init/rpc.o 00:08:53.425 CC lib/fsdev/fsdev_io.o 00:08:53.425 CC lib/fsdev/fsdev.o 00:08:53.425 CC lib/fsdev/fsdev_rpc.o 00:08:53.682 LIB libspdk_init.a 00:08:53.682 LIB libspdk_virtio.a 00:08:53.682 LIB libspdk_vfu_tgt.a 00:08:53.941 LIB libspdk_fsdev.a 00:08:53.941 CC lib/event/app.o 00:08:53.941 CC lib/event/log_rpc.o 00:08:53.941 CC lib/event/reactor.o 00:08:53.941 CC lib/event/app_rpc.o 00:08:53.941 CC lib/event/scheduler_static.o 00:08:54.199 LIB libspdk_nvme.a 00:08:54.199 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:08:54.458 LIB libspdk_event.a 00:08:54.458 LIB libspdk_accel.a 00:08:54.716 CC lib/bdev/bdev.o 00:08:54.716 CC lib/bdev/bdev_rpc.o 00:08:54.716 CC lib/bdev/bdev_zone.o 00:08:54.717 CC lib/bdev/part.o 00:08:54.717 CC lib/bdev/scsi_nvme.o 00:08:54.976 LIB libspdk_fuse_dispatcher.a 00:08:55.914 LIB libspdk_blob.a 00:08:56.173 CC lib/lvol/lvol.o 00:08:56.173 CC lib/blobfs/blobfs.o 00:08:56.173 CC lib/blobfs/tree.o 00:08:57.106 LIB libspdk_lvol.a 00:08:57.106 LIB libspdk_blobfs.a 00:08:57.364 LIB libspdk_bdev.a 00:08:57.627 CC lib/ublk/ublk.o 00:08:57.627 CC lib/ublk/ublk_rpc.o 00:08:57.627 CC lib/ftl/ftl_io.o 00:08:57.627 CC lib/ftl/ftl_core.o 00:08:57.627 CC lib/ftl/ftl_init.o 00:08:57.627 CC lib/ftl/ftl_layout.o 00:08:57.627 CC lib/ftl/ftl_debug.o 00:08:57.627 CC lib/ftl/ftl_sb.o 00:08:57.627 CC lib/ftl/ftl_l2p_flat.o 00:08:57.627 CC lib/ftl/ftl_l2p.o 00:08:57.627 CC lib/ftl/ftl_nv_cache.o 00:08:57.627 CC lib/ftl/ftl_band_ops.o 00:08:57.627 CC lib/nbd/nbd.o 00:08:57.627 CC lib/ftl/ftl_band.o 00:08:57.627 CC lib/ftl/ftl_writer.o 00:08:57.627 CC lib/ftl/ftl_rq.o 00:08:57.627 CC lib/ftl/ftl_reloc.o 00:08:57.627 CC lib/nbd/nbd_rpc.o 00:08:57.627 CC lib/ftl/ftl_l2p_cache.o 00:08:57.627 CC lib/ftl/ftl_p2l.o 00:08:57.627 CC lib/ftl/ftl_p2l_log.o 00:08:57.627 CC lib/ftl/mngt/ftl_mngt.o 00:08:57.627 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:57.627 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:57.627 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:57.627 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:57.627 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:57.627 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:57.627 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:57.627 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:57.627 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:57.627 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:08:57.627 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:57.627 CC lib/scsi/dev.o 00:08:57.627 CC lib/ftl/mngt/ftl_mngt_upgrade.o 
00:08:57.627 CC lib/ftl/utils/ftl_conf.o 00:08:57.627 CC lib/scsi/lun.o 00:08:57.627 CC lib/ftl/utils/ftl_md.o 00:08:57.627 CC lib/ftl/utils/ftl_mempool.o 00:08:57.627 CC lib/scsi/port.o 00:08:57.627 CC lib/ftl/utils/ftl_bitmap.o 00:08:57.627 CC lib/scsi/scsi.o 00:08:57.627 CC lib/scsi/scsi_pr.o 00:08:57.627 CC lib/scsi/scsi_rpc.o 00:08:57.627 CC lib/scsi/task.o 00:08:57.627 CC lib/scsi/scsi_bdev.o 00:08:57.627 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:57.627 CC lib/ftl/utils/ftl_property.o 00:08:57.627 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:57.627 CC lib/nvmf/ctrlr.o 00:08:57.627 CC lib/nvmf/ctrlr_discovery.o 00:08:57.627 CC lib/nvmf/ctrlr_bdev.o 00:08:57.627 CC lib/nvmf/subsystem.o 00:08:57.627 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:57.627 CC lib/nvmf/nvmf_rpc.o 00:08:57.627 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:57.627 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:57.627 CC lib/nvmf/nvmf.o 00:08:57.627 CC lib/nvmf/transport.o 00:08:57.627 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:57.627 CC lib/nvmf/tcp.o 00:08:57.627 CC lib/nvmf/stubs.o 00:08:57.627 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:57.627 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:08:57.627 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:57.627 CC lib/nvmf/vfio_user.o 00:08:57.627 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:57.627 CC lib/nvmf/mdns_server.o 00:08:57.627 CC lib/nvmf/rdma.o 00:08:57.627 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:08:57.627 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:57.627 CC lib/nvmf/auth.o 00:08:57.886 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:08:57.886 CC lib/ftl/base/ftl_base_dev.o 00:08:57.886 CC lib/ftl/base/ftl_base_bdev.o 00:08:57.886 CC lib/ftl/ftl_trace.o 00:08:58.145 LIB libspdk_scsi.a 00:08:58.145 LIB libspdk_nbd.a 00:08:58.403 LIB libspdk_ublk.a 00:08:58.403 CC lib/vhost/vhost_rpc.o 00:08:58.403 CC lib/vhost/vhost.o 00:08:58.403 CC lib/vhost/vhost_scsi.o 00:08:58.403 CC lib/vhost/vhost_blk.o 00:08:58.403 CC lib/vhost/rte_vhost_user.o 00:08:58.403 CC lib/iscsi/conn.o 00:08:58.403 CC lib/iscsi/init_grp.o 00:08:58.403 CC lib/iscsi/iscsi.o 00:08:58.403 CC lib/iscsi/param.o 00:08:58.403 CC lib/iscsi/portal_grp.o 00:08:58.403 CC lib/iscsi/tgt_node.o 00:08:58.403 CC lib/iscsi/iscsi_rpc.o 00:08:58.403 CC lib/iscsi/task.o 00:08:58.403 CC lib/iscsi/iscsi_subsystem.o 00:08:58.661 LIB libspdk_ftl.a 00:08:59.596 LIB libspdk_vhost.a 00:08:59.596 LIB libspdk_nvmf.a 00:08:59.596 LIB libspdk_iscsi.a 00:09:00.162 CC module/env_dpdk/env_dpdk_rpc.o 00:09:00.162 CC module/vfu_device/vfu_virtio.o 00:09:00.162 CC module/vfu_device/vfu_virtio_scsi.o 00:09:00.162 CC module/vfu_device/vfu_virtio_blk.o 00:09:00.162 CC module/vfu_device/vfu_virtio_fs.o 00:09:00.162 CC module/vfu_device/vfu_virtio_rpc.o 00:09:00.162 CC module/blob/bdev/blob_bdev.o 00:09:00.162 CC module/scheduler/gscheduler/gscheduler.o 00:09:00.162 CC module/sock/posix/posix.o 00:09:00.162 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:09:00.162 CC module/accel/ioat/accel_ioat_rpc.o 00:09:00.162 CC module/accel/ioat/accel_ioat.o 00:09:00.162 CC module/scheduler/dynamic/scheduler_dynamic.o 00:09:00.162 CC module/fsdev/aio/fsdev_aio.o 00:09:00.162 CC module/fsdev/aio/fsdev_aio_rpc.o 00:09:00.162 CC module/fsdev/aio/linux_aio_mgr.o 00:09:00.162 CC module/keyring/linux/keyring_rpc.o 00:09:00.162 CC module/accel/error/accel_error.o 00:09:00.162 CC module/keyring/linux/keyring.o 00:09:00.162 LIB libspdk_env_dpdk_rpc.a 00:09:00.162 CC module/accel/iaa/accel_iaa_rpc.o 00:09:00.162 CC module/accel/iaa/accel_iaa.o 00:09:00.162 CC 
module/accel/error/accel_error_rpc.o 00:09:00.162 CC module/accel/dsa/accel_dsa.o 00:09:00.162 CC module/accel/dsa/accel_dsa_rpc.o 00:09:00.162 CC module/keyring/file/keyring_rpc.o 00:09:00.162 CC module/keyring/file/keyring.o 00:09:00.162 LIB libspdk_scheduler_gscheduler.a 00:09:00.420 LIB libspdk_scheduler_dpdk_governor.a 00:09:00.420 LIB libspdk_keyring_linux.a 00:09:00.420 LIB libspdk_keyring_file.a 00:09:00.420 LIB libspdk_accel_error.a 00:09:00.420 LIB libspdk_blob_bdev.a 00:09:00.420 LIB libspdk_scheduler_dynamic.a 00:09:00.420 LIB libspdk_accel_ioat.a 00:09:00.420 LIB libspdk_accel_iaa.a 00:09:00.420 LIB libspdk_vfu_device.a 00:09:00.420 LIB libspdk_accel_dsa.a 00:09:00.679 CC module/bdev/null/bdev_null.o 00:09:00.679 CC module/bdev/null/bdev_null_rpc.o 00:09:00.679 CC module/bdev/lvol/vbdev_lvol.o 00:09:00.679 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:09:00.679 CC module/bdev/split/vbdev_split.o 00:09:00.679 CC module/bdev/split/vbdev_split_rpc.o 00:09:00.679 CC module/bdev/delay/vbdev_delay.o 00:09:00.679 CC module/bdev/delay/vbdev_delay_rpc.o 00:09:00.679 CC module/bdev/raid/bdev_raid_rpc.o 00:09:00.679 CC module/bdev/raid/bdev_raid_sb.o 00:09:00.679 CC module/bdev/raid/bdev_raid.o 00:09:00.679 CC module/bdev/raid/raid0.o 00:09:00.679 CC module/bdev/raid/raid1.o 00:09:00.679 CC module/bdev/raid/concat.o 00:09:00.679 CC module/bdev/error/vbdev_error.o 00:09:00.679 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:09:00.679 CC module/bdev/error/vbdev_error_rpc.o 00:09:00.679 CC module/bdev/zone_block/vbdev_zone_block.o 00:09:00.679 CC module/bdev/aio/bdev_aio_rpc.o 00:09:00.679 CC module/bdev/iscsi/bdev_iscsi.o 00:09:00.679 CC module/bdev/aio/bdev_aio.o 00:09:00.679 CC module/bdev/malloc/bdev_malloc.o 00:09:00.679 CC module/bdev/gpt/gpt.o 00:09:00.679 CC module/bdev/malloc/bdev_malloc_rpc.o 00:09:00.679 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:09:00.679 CC module/bdev/gpt/vbdev_gpt.o 00:09:00.679 CC module/bdev/ftl/bdev_ftl_rpc.o 00:09:00.679 CC module/bdev/ftl/bdev_ftl.o 00:09:00.679 CC module/bdev/nvme/bdev_nvme.o 00:09:00.679 CC module/blobfs/bdev/blobfs_bdev.o 00:09:00.679 CC module/bdev/nvme/bdev_nvme_rpc.o 00:09:00.679 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:09:00.679 CC module/bdev/nvme/nvme_rpc.o 00:09:00.679 CC module/bdev/nvme/bdev_mdns_client.o 00:09:00.679 CC module/bdev/nvme/vbdev_opal.o 00:09:00.679 CC module/bdev/nvme/vbdev_opal_rpc.o 00:09:00.679 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:09:00.679 CC module/bdev/passthru/vbdev_passthru.o 00:09:00.679 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:09:00.679 LIB libspdk_sock_posix.a 00:09:00.679 CC module/bdev/virtio/bdev_virtio_scsi.o 00:09:00.679 CC module/bdev/virtio/bdev_virtio_blk.o 00:09:00.679 CC module/bdev/virtio/bdev_virtio_rpc.o 00:09:00.679 LIB libspdk_fsdev_aio.a 00:09:00.937 LIB libspdk_bdev_error.a 00:09:00.937 LIB libspdk_bdev_null.a 00:09:00.937 LIB libspdk_blobfs_bdev.a 00:09:00.937 LIB libspdk_bdev_gpt.a 00:09:00.937 LIB libspdk_bdev_ftl.a 00:09:00.937 LIB libspdk_bdev_aio.a 00:09:00.937 LIB libspdk_bdev_zone_block.a 00:09:00.937 LIB libspdk_bdev_split.a 00:09:00.937 LIB libspdk_bdev_passthru.a 00:09:01.196 LIB libspdk_bdev_delay.a 00:09:01.196 LIB libspdk_bdev_malloc.a 00:09:01.196 LIB libspdk_bdev_lvol.a 00:09:01.196 LIB libspdk_bdev_iscsi.a 00:09:01.196 LIB libspdk_bdev_virtio.a 00:09:01.764 LIB libspdk_bdev_raid.a 00:09:03.143 LIB libspdk_bdev_nvme.a 00:09:03.402 CC module/event/subsystems/keyring/keyring.o 00:09:03.402 CC module/event/subsystems/sock/sock.o 00:09:03.402 CC 
module/event/subsystems/scheduler/scheduler.o 00:09:03.402 CC module/event/subsystems/vmd/vmd_rpc.o 00:09:03.402 CC module/event/subsystems/vmd/vmd.o 00:09:03.402 CC module/event/subsystems/iobuf/iobuf.o 00:09:03.402 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:09:03.402 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:09:03.402 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:09:03.402 CC module/event/subsystems/fsdev/fsdev.o 00:09:03.661 LIB libspdk_event_scheduler.a 00:09:03.661 LIB libspdk_event_keyring.a 00:09:03.661 LIB libspdk_event_sock.a 00:09:03.661 LIB libspdk_event_vfu_tgt.a 00:09:03.661 LIB libspdk_event_vmd.a 00:09:03.661 LIB libspdk_event_vhost_blk.a 00:09:03.661 LIB libspdk_event_iobuf.a 00:09:03.661 LIB libspdk_event_fsdev.a 00:09:03.920 CC module/event/subsystems/accel/accel.o 00:09:03.920 LIB libspdk_event_accel.a 00:09:04.488 CC module/event/subsystems/bdev/bdev.o 00:09:04.488 LIB libspdk_event_bdev.a 00:09:04.747 CC module/event/subsystems/nbd/nbd.o 00:09:04.747 CC module/event/subsystems/ublk/ublk.o 00:09:04.747 CC module/event/subsystems/scsi/scsi.o 00:09:04.747 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:09:04.747 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:09:04.747 LIB libspdk_event_nbd.a 00:09:05.006 LIB libspdk_event_ublk.a 00:09:05.006 LIB libspdk_event_scsi.a 00:09:05.006 LIB libspdk_event_nvmf.a 00:09:05.263 CC module/event/subsystems/iscsi/iscsi.o 00:09:05.263 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:09:05.263 LIB libspdk_event_iscsi.a 00:09:05.520 LIB libspdk_event_vhost_scsi.a 00:09:05.787 CC test/rpc_client/rpc_client_test.o 00:09:05.787 TEST_HEADER include/spdk/accel_module.h 00:09:05.787 TEST_HEADER include/spdk/accel.h 00:09:05.787 TEST_HEADER include/spdk/assert.h 00:09:05.787 TEST_HEADER include/spdk/barrier.h 00:09:05.787 TEST_HEADER include/spdk/bdev.h 00:09:05.787 TEST_HEADER include/spdk/bdev_module.h 00:09:05.787 TEST_HEADER include/spdk/base64.h 00:09:05.787 TEST_HEADER include/spdk/bdev_zone.h 00:09:05.787 TEST_HEADER include/spdk/blob_bdev.h 00:09:05.787 TEST_HEADER include/spdk/bit_array.h 00:09:05.787 TEST_HEADER include/spdk/bit_pool.h 00:09:05.787 TEST_HEADER include/spdk/blobfs.h 00:09:05.787 TEST_HEADER include/spdk/blobfs_bdev.h 00:09:05.787 CXX app/trace/trace.o 00:09:05.787 TEST_HEADER include/spdk/cpuset.h 00:09:05.787 TEST_HEADER include/spdk/conf.h 00:09:05.787 TEST_HEADER include/spdk/blob.h 00:09:05.787 TEST_HEADER include/spdk/config.h 00:09:05.787 TEST_HEADER include/spdk/crc16.h 00:09:05.787 TEST_HEADER include/spdk/crc32.h 00:09:05.787 CC app/spdk_nvme_identify/identify.o 00:09:05.787 TEST_HEADER include/spdk/crc64.h 00:09:05.787 TEST_HEADER include/spdk/dma.h 00:09:05.787 TEST_HEADER include/spdk/endian.h 00:09:05.787 TEST_HEADER include/spdk/dif.h 00:09:05.787 TEST_HEADER include/spdk/env_dpdk.h 00:09:05.787 TEST_HEADER include/spdk/event.h 00:09:05.787 CC app/trace_record/trace_record.o 00:09:05.787 TEST_HEADER include/spdk/fd.h 00:09:05.787 TEST_HEADER include/spdk/fd_group.h 00:09:05.787 TEST_HEADER include/spdk/env.h 00:09:05.787 TEST_HEADER include/spdk/file.h 00:09:05.787 CC app/spdk_nvme_discover/discovery_aer.o 00:09:05.787 TEST_HEADER include/spdk/fsdev.h 00:09:05.787 CC app/spdk_lspci/spdk_lspci.o 00:09:05.787 TEST_HEADER include/spdk/ftl.h 00:09:05.787 TEST_HEADER include/spdk/fuse_dispatcher.h 00:09:05.787 TEST_HEADER include/spdk/fsdev_module.h 00:09:05.787 TEST_HEADER include/spdk/gpt_spec.h 00:09:05.787 TEST_HEADER include/spdk/histogram_data.h 00:09:05.787 TEST_HEADER 
include/spdk/idxd.h 00:09:05.787 TEST_HEADER include/spdk/idxd_spec.h 00:09:05.787 TEST_HEADER include/spdk/hexlify.h 00:09:05.787 CC app/spdk_top/spdk_top.o 00:09:05.787 TEST_HEADER include/spdk/init.h 00:09:05.787 CC app/spdk_nvme_perf/perf.o 00:09:05.787 TEST_HEADER include/spdk/ioat.h 00:09:05.787 TEST_HEADER include/spdk/ioat_spec.h 00:09:05.787 TEST_HEADER include/spdk/jsonrpc.h 00:09:05.787 TEST_HEADER include/spdk/iscsi_spec.h 00:09:05.787 TEST_HEADER include/spdk/json.h 00:09:05.787 TEST_HEADER include/spdk/keyring.h 00:09:05.787 TEST_HEADER include/spdk/keyring_module.h 00:09:05.787 TEST_HEADER include/spdk/likely.h 00:09:05.787 TEST_HEADER include/spdk/lvol.h 00:09:05.787 TEST_HEADER include/spdk/log.h 00:09:05.787 TEST_HEADER include/spdk/memory.h 00:09:05.787 TEST_HEADER include/spdk/mmio.h 00:09:05.787 TEST_HEADER include/spdk/md5.h 00:09:05.787 TEST_HEADER include/spdk/nbd.h 00:09:05.787 CC examples/interrupt_tgt/interrupt_tgt.o 00:09:05.787 TEST_HEADER include/spdk/net.h 00:09:05.787 TEST_HEADER include/spdk/notify.h 00:09:05.787 TEST_HEADER include/spdk/nvme.h 00:09:05.787 TEST_HEADER include/spdk/nvme_ocssd.h 00:09:05.787 TEST_HEADER include/spdk/nvme_intel.h 00:09:05.787 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:09:05.787 TEST_HEADER include/spdk/nvme_zns.h 00:09:05.787 TEST_HEADER include/spdk/nvme_spec.h 00:09:05.787 TEST_HEADER include/spdk/nvmf_cmd.h 00:09:05.787 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:09:05.787 TEST_HEADER include/spdk/nvmf.h 00:09:05.787 TEST_HEADER include/spdk/nvmf_spec.h 00:09:05.787 TEST_HEADER include/spdk/nvmf_transport.h 00:09:05.787 TEST_HEADER include/spdk/opal_spec.h 00:09:05.787 TEST_HEADER include/spdk/opal.h 00:09:05.787 TEST_HEADER include/spdk/pci_ids.h 00:09:05.787 TEST_HEADER include/spdk/pipe.h 00:09:05.787 TEST_HEADER include/spdk/queue.h 00:09:05.787 TEST_HEADER include/spdk/reduce.h 00:09:05.787 TEST_HEADER include/spdk/rpc.h 00:09:05.787 TEST_HEADER include/spdk/scheduler.h 00:09:05.787 TEST_HEADER include/spdk/scsi.h 00:09:05.787 TEST_HEADER include/spdk/sock.h 00:09:05.788 TEST_HEADER include/spdk/stdinc.h 00:09:05.788 TEST_HEADER include/spdk/string.h 00:09:05.788 TEST_HEADER include/spdk/scsi_spec.h 00:09:05.788 TEST_HEADER include/spdk/thread.h 00:09:05.788 TEST_HEADER include/spdk/tree.h 00:09:05.788 TEST_HEADER include/spdk/trace_parser.h 00:09:05.788 TEST_HEADER include/spdk/trace.h 00:09:05.788 TEST_HEADER include/spdk/ublk.h 00:09:05.788 TEST_HEADER include/spdk/util.h 00:09:05.788 TEST_HEADER include/spdk/uuid.h 00:09:05.788 TEST_HEADER include/spdk/version.h 00:09:05.788 TEST_HEADER include/spdk/vfio_user_pci.h 00:09:05.788 TEST_HEADER include/spdk/vfio_user_spec.h 00:09:05.788 TEST_HEADER include/spdk/vhost.h 00:09:05.788 TEST_HEADER include/spdk/vmd.h 00:09:05.788 TEST_HEADER include/spdk/xor.h 00:09:05.788 CC app/spdk_dd/spdk_dd.o 00:09:05.788 TEST_HEADER include/spdk/zipf.h 00:09:05.788 CXX test/cpp_headers/accel.o 00:09:05.788 CXX test/cpp_headers/accel_module.o 00:09:05.788 CC app/nvmf_tgt/nvmf_main.o 00:09:05.788 CXX test/cpp_headers/assert.o 00:09:05.788 CXX test/cpp_headers/barrier.o 00:09:05.788 CXX test/cpp_headers/base64.o 00:09:05.788 CXX test/cpp_headers/bdev.o 00:09:05.788 CXX test/cpp_headers/bdev_module.o 00:09:05.788 CXX test/cpp_headers/bdev_zone.o 00:09:05.788 CXX test/cpp_headers/bit_pool.o 00:09:05.788 CXX test/cpp_headers/blob_bdev.o 00:09:05.788 CXX test/cpp_headers/bit_array.o 00:09:05.788 CXX test/cpp_headers/blobfs_bdev.o 00:09:05.788 CXX test/cpp_headers/blobfs.o 00:09:05.788 CXX 
test/cpp_headers/blob.o 00:09:05.788 CXX test/cpp_headers/config.o 00:09:05.788 CXX test/cpp_headers/conf.o 00:09:05.788 CXX test/cpp_headers/cpuset.o 00:09:05.788 CXX test/cpp_headers/crc16.o 00:09:05.788 CXX test/cpp_headers/crc32.o 00:09:05.788 CXX test/cpp_headers/crc64.o 00:09:05.788 CXX test/cpp_headers/dif.o 00:09:05.788 CXX test/cpp_headers/endian.o 00:09:05.788 CXX test/cpp_headers/dma.o 00:09:05.788 CXX test/cpp_headers/env_dpdk.o 00:09:05.788 CXX test/cpp_headers/env.o 00:09:05.788 CXX test/cpp_headers/event.o 00:09:05.788 CXX test/cpp_headers/fd_group.o 00:09:05.788 CXX test/cpp_headers/fd.o 00:09:05.788 CXX test/cpp_headers/fsdev.o 00:09:05.788 CXX test/cpp_headers/file.o 00:09:05.788 CXX test/cpp_headers/fsdev_module.o 00:09:05.788 CXX test/cpp_headers/ftl.o 00:09:05.788 CXX test/cpp_headers/fuse_dispatcher.o 00:09:05.788 CXX test/cpp_headers/gpt_spec.o 00:09:05.788 CXX test/cpp_headers/hexlify.o 00:09:05.788 CXX test/cpp_headers/histogram_data.o 00:09:05.788 CXX test/cpp_headers/idxd.o 00:09:05.788 CXX test/cpp_headers/idxd_spec.o 00:09:05.788 CXX test/cpp_headers/init.o 00:09:05.788 CXX test/cpp_headers/ioat.o 00:09:05.788 CXX test/cpp_headers/ioat_spec.o 00:09:05.788 CC app/spdk_tgt/spdk_tgt.o 00:09:05.788 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:09:05.788 CC app/iscsi_tgt/iscsi_tgt.o 00:09:05.788 CC test/env/pci/pci_ut.o 00:09:05.788 CC test/thread/lock/spdk_lock.o 00:09:05.788 CC test/env/vtophys/vtophys.o 00:09:05.788 CC examples/util/zipf/zipf.o 00:09:05.788 CC examples/ioat/verify/verify.o 00:09:05.788 CC test/thread/poller_perf/poller_perf.o 00:09:05.788 CC examples/ioat/perf/perf.o 00:09:05.788 CXX test/cpp_headers/iscsi_spec.o 00:09:05.788 CC test/env/memory/memory_ut.o 00:09:05.788 CC test/app/stub/stub.o 00:09:05.788 CC test/app/histogram_perf/histogram_perf.o 00:09:05.788 CC app/fio/nvme/fio_plugin.o 00:09:05.788 CC test/app/jsoncat/jsoncat.o 00:09:05.788 LINK rpc_client_test 00:09:05.788 CC test/dma/test_dma/test_dma.o 00:09:05.788 CC test/env/mem_callbacks/mem_callbacks.o 00:09:06.046 CC app/fio/bdev/fio_plugin.o 00:09:06.047 LINK spdk_lspci 00:09:06.047 CC test/app/bdev_svc/bdev_svc.o 00:09:06.047 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:09:06.047 CXX test/cpp_headers/json.o 00:09:06.047 CXX test/cpp_headers/jsonrpc.o 00:09:06.047 LINK interrupt_tgt 00:09:06.047 CXX test/cpp_headers/keyring.o 00:09:06.047 CXX test/cpp_headers/keyring_module.o 00:09:06.047 LINK spdk_nvme_discover 00:09:06.047 LINK nvmf_tgt 00:09:06.047 CXX test/cpp_headers/likely.o 00:09:06.047 CXX test/cpp_headers/lvol.o 00:09:06.047 CXX test/cpp_headers/log.o 00:09:06.047 CXX test/cpp_headers/md5.o 00:09:06.047 CXX test/cpp_headers/memory.o 00:09:06.047 CXX test/cpp_headers/mmio.o 00:09:06.047 CXX test/cpp_headers/nbd.o 00:09:06.047 CXX test/cpp_headers/net.o 00:09:06.047 CXX test/cpp_headers/notify.o 00:09:06.047 CXX test/cpp_headers/nvme.o 00:09:06.047 LINK poller_perf 00:09:06.047 LINK vtophys 00:09:06.047 CXX test/cpp_headers/nvme_intel.o 00:09:06.047 CXX test/cpp_headers/nvme_ocssd.o 00:09:06.047 LINK zipf 00:09:06.047 CXX test/cpp_headers/nvme_ocssd_spec.o 00:09:06.047 CXX test/cpp_headers/nvme_spec.o 00:09:06.047 CXX test/cpp_headers/nvme_zns.o 00:09:06.047 CXX test/cpp_headers/nvmf_cmd.o 00:09:06.047 CXX test/cpp_headers/nvmf_fc_spec.o 00:09:06.047 CXX test/cpp_headers/nvmf.o 00:09:06.047 CXX test/cpp_headers/nvmf_spec.o 00:09:06.047 CXX test/cpp_headers/nvmf_transport.o 00:09:06.047 CXX test/cpp_headers/opal.o 00:09:06.047 LINK env_dpdk_post_init 00:09:06.047 CXX 
test/cpp_headers/opal_spec.o 00:09:06.047 CXX test/cpp_headers/pci_ids.o 00:09:06.047 CXX test/cpp_headers/pipe.o 00:09:06.047 CXX test/cpp_headers/queue.o 00:09:06.047 LINK stub 00:09:06.047 CXX test/cpp_headers/reduce.o 00:09:06.047 CXX test/cpp_headers/rpc.o 00:09:06.047 CXX test/cpp_headers/scheduler.o 00:09:06.047 CXX test/cpp_headers/scsi.o 00:09:06.047 CXX test/cpp_headers/scsi_spec.o 00:09:06.047 LINK ioat_perf 00:09:06.047 LINK jsoncat 00:09:06.047 LINK histogram_perf 00:09:06.047 LINK spdk_trace_record 00:09:06.047 CXX test/cpp_headers/sock.o 00:09:06.047 CXX test/cpp_headers/stdinc.o 00:09:06.047 CXX test/cpp_headers/string.o 00:09:06.047 CXX test/cpp_headers/thread.o 00:09:06.047 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:09:06.047 CXX test/cpp_headers/trace.o 00:09:06.047 LINK verify 00:09:06.307 LINK spdk_tgt 00:09:06.307 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:09:06.307 LINK iscsi_tgt 00:09:06.307 CXX test/cpp_headers/trace_parser.o 00:09:06.307 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:09:06.307 CXX test/cpp_headers/tree.o 00:09:06.307 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:09:06.307 CXX test/cpp_headers/ublk.o 00:09:06.307 CXX test/cpp_headers/util.o 00:09:06.307 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:09:06.307 LINK bdev_svc 00:09:06.307 CXX test/cpp_headers/uuid.o 00:09:06.307 CXX test/cpp_headers/version.o 00:09:06.307 CXX test/cpp_headers/vfio_user_pci.o 00:09:06.307 CXX test/cpp_headers/vfio_user_spec.o 00:09:06.307 CXX test/cpp_headers/vhost.o 00:09:06.307 CXX test/cpp_headers/vmd.o 00:09:06.307 CXX test/cpp_headers/xor.o 00:09:06.307 CXX test/cpp_headers/zipf.o 00:09:06.307 LINK spdk_trace 00:09:06.565 LINK spdk_dd 00:09:06.565 LINK pci_ut 00:09:06.565 LINK nvme_fuzz 00:09:06.565 LINK test_dma 00:09:06.565 LINK spdk_nvme_identify 00:09:06.565 LINK spdk_nvme 00:09:06.565 LINK spdk_nvme_perf 00:09:06.565 LINK llvm_vfio_fuzz 00:09:06.565 LINK mem_callbacks 00:09:06.565 LINK spdk_bdev 00:09:06.565 LINK vhost_fuzz 00:09:06.823 CC examples/vmd/led/led.o 00:09:06.823 CC examples/sock/hello_world/hello_sock.o 00:09:06.823 CC examples/idxd/perf/perf.o 00:09:06.823 CC examples/vmd/lsvmd/lsvmd.o 00:09:06.823 LINK spdk_top 00:09:06.823 CC examples/thread/thread/thread_ex.o 00:09:06.823 CC app/vhost/vhost.o 00:09:06.823 LINK led 00:09:07.081 LINK lsvmd 00:09:07.081 LINK hello_sock 00:09:07.081 LINK vhost 00:09:07.081 LINK idxd_perf 00:09:07.081 LINK thread 00:09:07.081 LINK llvm_nvme_fuzz 00:09:07.081 LINK memory_ut 00:09:07.340 LINK spdk_lock 00:09:07.598 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:09:07.598 CC examples/nvme/hotplug/hotplug.o 00:09:07.598 CC examples/nvme/cmb_copy/cmb_copy.o 00:09:07.598 CC examples/nvme/nvme_manage/nvme_manage.o 00:09:07.598 CC examples/nvme/abort/abort.o 00:09:07.598 CC examples/nvme/arbitration/arbitration.o 00:09:07.598 CC examples/nvme/reconnect/reconnect.o 00:09:07.598 CC examples/nvme/hello_world/hello_world.o 00:09:07.857 LINK iscsi_fuzz 00:09:07.857 LINK cmb_copy 00:09:07.857 LINK pmr_persistence 00:09:07.857 LINK hello_world 00:09:07.857 LINK hotplug 00:09:07.857 LINK reconnect 00:09:08.115 LINK arbitration 00:09:08.115 LINK abort 00:09:08.115 LINK nvme_manage 00:09:08.115 CC test/event/reactor/reactor.o 00:09:08.115 CC test/event/event_perf/event_perf.o 00:09:08.115 CC test/event/reactor_perf/reactor_perf.o 00:09:08.115 CC test/event/app_repeat/app_repeat.o 00:09:08.115 CC test/event/scheduler/scheduler.o 00:09:08.373 LINK reactor 00:09:08.373 LINK app_repeat 00:09:08.373 LINK event_perf 
00:09:08.373 LINK reactor_perf 00:09:08.373 LINK scheduler 00:09:08.669 CC test/nvme/aer/aer.o 00:09:08.669 CC test/nvme/connect_stress/connect_stress.o 00:09:08.669 CC test/nvme/compliance/nvme_compliance.o 00:09:08.669 CC test/nvme/reserve/reserve.o 00:09:08.669 CC test/nvme/reset/reset.o 00:09:08.669 CC test/nvme/doorbell_aers/doorbell_aers.o 00:09:08.669 CC test/nvme/simple_copy/simple_copy.o 00:09:08.669 CC test/nvme/startup/startup.o 00:09:08.669 CC test/nvme/fused_ordering/fused_ordering.o 00:09:08.669 CC test/nvme/err_injection/err_injection.o 00:09:08.669 CC test/nvme/e2edp/nvme_dp.o 00:09:08.669 CC test/nvme/cuse/cuse.o 00:09:08.669 CC test/nvme/sgl/sgl.o 00:09:08.669 CC test/nvme/fdp/fdp.o 00:09:08.669 CC test/nvme/boot_partition/boot_partition.o 00:09:08.669 CC test/nvme/overhead/overhead.o 00:09:08.669 CC test/blobfs/mkfs/mkfs.o 00:09:08.669 CC test/accel/dif/dif.o 00:09:08.669 CC test/lvol/esnap/esnap.o 00:09:08.669 LINK doorbell_aers 00:09:08.669 LINK connect_stress 00:09:08.669 LINK boot_partition 00:09:08.669 LINK startup 00:09:08.669 LINK err_injection 00:09:08.669 LINK reserve 00:09:08.669 LINK mkfs 00:09:08.669 LINK fused_ordering 00:09:08.945 LINK fdp 00:09:08.945 LINK sgl 00:09:08.945 LINK simple_copy 00:09:08.945 LINK overhead 00:09:08.945 LINK aer 00:09:08.945 LINK nvme_dp 00:09:08.945 LINK reset 00:09:08.945 LINK nvme_compliance 00:09:09.209 LINK dif 00:09:09.209 CC examples/accel/perf/accel_perf.o 00:09:09.209 CC examples/blob/cli/blobcli.o 00:09:09.209 CC examples/blob/hello_world/hello_blob.o 00:09:09.209 CC examples/fsdev/hello_world/hello_fsdev.o 00:09:09.493 LINK hello_blob 00:09:09.493 LINK hello_fsdev 00:09:09.493 LINK accel_perf 00:09:09.753 LINK blobcli 00:09:09.753 LINK cuse 00:09:10.687 CC examples/bdev/bdevperf/bdevperf.o 00:09:10.687 CC examples/bdev/hello_world/hello_bdev.o 00:09:10.687 LINK hello_bdev 00:09:11.253 LINK bdevperf 00:09:11.253 CC test/bdev/bdevio/bdevio.o 00:09:11.512 LINK bdevio 00:09:13.413 CC examples/nvmf/nvmf/nvmf.o 00:09:13.413 LINK nvmf 00:09:13.413 LINK esnap 00:09:15.313 00:09:15.313 real 0m53.089s 00:09:15.313 user 8m11.863s 00:09:15.313 sys 2m37.513s 00:09:15.313 16:33:19 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:09:15.313 16:33:19 make -- common/autotest_common.sh@10 -- $ set +x 00:09:15.313 ************************************ 00:09:15.313 END TEST make 00:09:15.313 ************************************ 00:09:15.313 16:33:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:09:15.313 16:33:19 -- pm/common@29 -- $ signal_monitor_resources TERM 00:09:15.313 16:33:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:09:15.313 16:33:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:15.313 16:33:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:09:15.313 16:33:19 -- pm/common@44 -- $ pid=3393563 00:09:15.313 16:33:19 -- pm/common@50 -- $ kill -TERM 3393563 00:09:15.313 16:33:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:15.313 16:33:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:09:15.313 16:33:19 -- pm/common@44 -- $ pid=3393565 00:09:15.313 16:33:19 -- pm/common@50 -- $ kill -TERM 3393565 00:09:15.313 16:33:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:15.313 16:33:19 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:09:15.313 16:33:19 -- pm/common@44 -- $ pid=3393567 00:09:15.314 16:33:19 -- pm/common@50 -- $ kill -TERM 3393567 00:09:15.314 16:33:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:15.314 16:33:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:09:15.314 16:33:19 -- pm/common@44 -- $ pid=3393593 00:09:15.314 16:33:19 -- pm/common@50 -- $ sudo -E kill -TERM 3393593 00:09:15.314 16:33:19 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:09:15.314 16:33:19 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:09:15.314 16:33:19 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:15.314 16:33:19 -- common/autotest_common.sh@1691 -- # lcov --version 00:09:15.314 16:33:19 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:15.314 16:33:19 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:15.314 16:33:19 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.314 16:33:19 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.314 16:33:19 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.314 16:33:19 -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.314 16:33:19 -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.314 16:33:19 -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.314 16:33:19 -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.314 16:33:19 -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.314 16:33:19 -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.314 16:33:19 -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.314 16:33:19 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.314 16:33:19 -- scripts/common.sh@344 -- # case "$op" in 00:09:15.314 16:33:19 -- scripts/common.sh@345 -- # : 1 00:09:15.314 16:33:19 -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.314 16:33:19 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:15.314 16:33:19 -- scripts/common.sh@365 -- # decimal 1 00:09:15.314 16:33:19 -- scripts/common.sh@353 -- # local d=1 00:09:15.314 16:33:19 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.314 16:33:19 -- scripts/common.sh@355 -- # echo 1 00:09:15.314 16:33:19 -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.314 16:33:19 -- scripts/common.sh@366 -- # decimal 2 00:09:15.314 16:33:19 -- scripts/common.sh@353 -- # local d=2 00:09:15.314 16:33:19 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.314 16:33:19 -- scripts/common.sh@355 -- # echo 2 00:09:15.314 16:33:19 -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.314 16:33:19 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.314 16:33:19 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.314 16:33:19 -- scripts/common.sh@368 -- # return 0 00:09:15.314 16:33:19 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.314 16:33:19 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:15.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.314 --rc genhtml_branch_coverage=1 00:09:15.314 --rc genhtml_function_coverage=1 00:09:15.314 --rc genhtml_legend=1 00:09:15.314 --rc geninfo_all_blocks=1 00:09:15.314 --rc geninfo_unexecuted_blocks=1 00:09:15.314 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:15.314 ' 00:09:15.314 16:33:19 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:15.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.314 --rc genhtml_branch_coverage=1 00:09:15.314 --rc genhtml_function_coverage=1 00:09:15.314 --rc genhtml_legend=1 00:09:15.314 --rc geninfo_all_blocks=1 00:09:15.314 --rc geninfo_unexecuted_blocks=1 00:09:15.314 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:15.314 ' 00:09:15.314 16:33:19 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:15.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.315 --rc genhtml_branch_coverage=1 00:09:15.315 --rc genhtml_function_coverage=1 00:09:15.315 --rc genhtml_legend=1 00:09:15.315 --rc geninfo_all_blocks=1 00:09:15.315 --rc geninfo_unexecuted_blocks=1 00:09:15.315 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:15.315 ' 00:09:15.315 16:33:19 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:15.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.315 --rc genhtml_branch_coverage=1 00:09:15.315 --rc genhtml_function_coverage=1 00:09:15.315 --rc genhtml_legend=1 00:09:15.315 --rc geninfo_all_blocks=1 00:09:15.315 --rc geninfo_unexecuted_blocks=1 00:09:15.315 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:15.315 ' 00:09:15.315 16:33:19 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.315 16:33:19 -- nvmf/common.sh@7 -- # uname -s 00:09:15.315 16:33:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.315 16:33:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.315 16:33:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.315 16:33:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.315 16:33:19 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.315 16:33:19 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:15.315 16:33:19 -- nvmf/common.sh@14 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.315 16:33:19 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:15.573 16:33:19 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:09:15.573 16:33:19 -- nvmf/common.sh@16 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:09:15.573 16:33:19 -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.573 16:33:19 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:15.573 16:33:19 -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:09:15.573 16:33:19 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.573 16:33:19 -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:09:15.573 16:33:19 -- scripts/common.sh@15 -- # shopt -s extglob 00:09:15.573 16:33:19 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.573 16:33:19 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.573 16:33:19 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.573 16:33:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.573 16:33:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.573 16:33:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.573 16:33:19 -- paths/export.sh@5 -- # export PATH 00:09:15.573 16:33:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.573 16:33:19 -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/setup.sh 00:09:15.573 16:33:19 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:15.573 16:33:19 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:15.573 16:33:19 -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:15.573 16:33:19 -- nvmf/common.sh@50 -- # : 0 00:09:15.573 16:33:19 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:15.573 16:33:19 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:15.573 16:33:19 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:15.573 16:33:19 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.573 16:33:19 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.573 16:33:19 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:15.573 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:15.573 16:33:19 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:15.573 16:33:19 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 
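[editor's note] The "[: : integer expression expected" message above comes from nvmf/common.sh line 31 evaluating '[' '' -eq 1 ']': test's -eq needs integers on both sides, and the left-hand variable expanded to an empty string, so '[' returns status 2 and the && branch is simply skipped. A minimal repro; SPDK_TEST_NVMF_NICS is a hypothetical stand-in for whatever variable was empty, not the actual name used by common.sh:

    #!/usr/bin/env bash
    # Minimal repro of the "[: : integer expression expected" seen in the log.
    # SPDK_TEST_NVMF_NICS is a hypothetical stand-in for the empty variable.
    SPDK_TEST_NVMF_NICS=""

    # Numeric -eq on an empty string: '[' gets '' where it expects an integer,
    # prints the error, and exits 2 (distinct from a false test's exit 1).
    [ "$SPDK_TEST_NVMF_NICS" -eq 1 ] && echo "nics requested"

    # One defensive pattern: default the expansion to 0 before the numeric test.
    [ "${SPDK_TEST_NVMF_NICS:-0}" -eq 1 ] && echo "nics requested"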
00:09:15.573 16:33:19 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:15.573 16:33:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:09:15.573 16:33:19 -- spdk/autotest.sh@32 -- # uname -s 00:09:15.573 16:33:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:09:15.573 16:33:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:09:15.573 16:33:19 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:09:15.573 16:33:19 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:09:15.573 16:33:19 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:09:15.573 16:33:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:09:15.573 16:33:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:09:15.573 16:33:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:09:15.573 16:33:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:09:15.573 16:33:19 -- spdk/autotest.sh@48 -- # udevadm_pid=3454130 00:09:15.573 16:33:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:09:15.573 16:33:19 -- pm/common@17 -- # local monitor 00:09:15.573 16:33:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:15.573 16:33:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:15.573 16:33:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:15.573 16:33:19 -- pm/common@21 -- # date +%s 00:09:15.573 16:33:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:09:15.573 16:33:19 -- pm/common@21 -- # date +%s 00:09:15.573 16:33:19 -- pm/common@21 -- # date +%s 00:09:15.573 16:33:19 -- pm/common@25 -- # sleep 1 00:09:15.573 16:33:19 -- pm/common@21 -- # date +%s 00:09:15.573 16:33:19 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730820799 00:09:15.573 16:33:19 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730820799 00:09:15.573 16:33:19 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730820799 00:09:15.573 16:33:19 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730820799 00:09:15.574 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730820799_collect-vmstat.pm.log 00:09:15.574 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730820799_collect-cpu-temp.pm.log 00:09:15.574 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730820799_collect-cpu-load.pm.log 00:09:15.574 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730820799_collect-bmc-pm.bmc.pm.log 00:09:16.511 16:33:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:09:16.511 
16:33:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:09:16.511 16:33:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:16.511 16:33:20 -- common/autotest_common.sh@10 -- # set +x 00:09:16.511 16:33:20 -- spdk/autotest.sh@59 -- # create_test_list 00:09:16.511 16:33:20 -- common/autotest_common.sh@750 -- # xtrace_disable 00:09:16.511 16:33:20 -- common/autotest_common.sh@10 -- # set +x 00:09:16.511 16:33:21 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:09:16.511 16:33:21 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:16.511 16:33:21 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:16.511 16:33:21 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:09:16.511 16:33:21 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:16.512 16:33:21 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:09:16.512 16:33:21 -- common/autotest_common.sh@1455 -- # uname 00:09:16.512 16:33:21 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:09:16.512 16:33:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:09:16.512 16:33:21 -- common/autotest_common.sh@1475 -- # uname 00:09:16.512 16:33:21 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:09:16.512 16:33:21 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:09:16.512 16:33:21 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:09:16.771 lcov: LCOV version 1.15 00:09:16.771 16:33:21 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:09:24.893 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:09:25.829 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:09:33.945 16:33:38 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:09:33.945 16:33:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:33.945 16:33:38 -- common/autotest_common.sh@10 -- # set +x 00:09:33.945 16:33:38 -- spdk/autotest.sh@78 -- # rm -f 00:09:33.945 16:33:38 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:09:37.236 0000:1a:00.0 (8086 0a54): Already using the nvme driver 00:09:37.496 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:09:37.496 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:09:37.496 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:09:37.496 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:09:37.496 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:09:37.496 0000:00:04.2 (8086 2021): 
Already using the ioatdma driver 00:09:37.496 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:09:37.496 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:09:37.496 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:09:37.755 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:09:37.755 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:09:37.755 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:09:37.755 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:09:37.755 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:09:37.755 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:09:37.755 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:09:40.290 16:33:44 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:09:40.290 16:33:44 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:09:40.290 16:33:44 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:09:40.290 16:33:44 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:09:40.290 16:33:44 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:40.290 16:33:44 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:09:40.290 16:33:44 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:09:40.290 16:33:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:40.290 16:33:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:40.290 16:33:44 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:09:40.290 16:33:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:40.290 16:33:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:40.290 16:33:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:09:40.290 16:33:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:09:40.290 16:33:44 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:09:40.290 No valid GPT data, bailing 00:09:40.290 16:33:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:40.290 16:33:44 -- scripts/common.sh@394 -- # pt= 00:09:40.290 16:33:44 -- scripts/common.sh@395 -- # return 1 00:09:40.290 16:33:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:09:40.290 1+0 records in 00:09:40.290 1+0 records out 00:09:40.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00206368 s, 508 MB/s 00:09:40.290 16:33:44 -- spdk/autotest.sh@105 -- # sync 00:09:40.290 16:33:44 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:09:40.290 16:33:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:09:40.290 16:33:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:09:46.862 16:33:50 -- spdk/autotest.sh@111 -- # uname -s 00:09:46.862 16:33:50 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:09:46.862 16:33:50 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:09:46.862 16:33:50 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:09:46.862 16:33:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:46.862 16:33:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:46.862 16:33:50 -- common/autotest_common.sh@10 -- # set +x 00:09:46.862 ************************************ 00:09:46.862 START TEST setup.sh 00:09:46.862 ************************************ 00:09:46.862 16:33:50 setup.sh -- common/autotest_common.sh@1127 -- 
# /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:09:46.862 * Looking for test storage... 00:09:46.862 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:09:46.862 16:33:50 setup.sh -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:46.862 16:33:50 setup.sh -- common/autotest_common.sh@1691 -- # lcov --version 00:09:46.862 16:33:50 setup.sh -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:46.862 16:33:50 setup.sh -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@345 -- # : 1 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@353 -- # local d=1 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@355 -- # echo 1 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@353 -- # local d=2 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@355 -- # echo 2 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.862 16:33:50 setup.sh -- scripts/common.sh@368 -- # return 0 00:09:46.862 16:33:50 setup.sh -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.862 16:33:50 setup.sh -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:46.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.862 --rc genhtml_branch_coverage=1 00:09:46.862 --rc genhtml_function_coverage=1 00:09:46.862 --rc genhtml_legend=1 00:09:46.862 --rc geninfo_all_blocks=1 00:09:46.862 --rc geninfo_unexecuted_blocks=1 00:09:46.862 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:46.862 ' 00:09:46.862 16:33:50 setup.sh -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:46.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.862 --rc genhtml_branch_coverage=1 00:09:46.862 --rc genhtml_function_coverage=1 
00:09:46.862 --rc genhtml_legend=1 00:09:46.862 --rc geninfo_all_blocks=1 00:09:46.862 --rc geninfo_unexecuted_blocks=1 00:09:46.862 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:46.862 ' 00:09:46.862 16:33:50 setup.sh -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:46.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.862 --rc genhtml_branch_coverage=1 00:09:46.862 --rc genhtml_function_coverage=1 00:09:46.862 --rc genhtml_legend=1 00:09:46.863 --rc geninfo_all_blocks=1 00:09:46.863 --rc geninfo_unexecuted_blocks=1 00:09:46.863 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:46.863 ' 00:09:46.863 16:33:50 setup.sh -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:46.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.863 --rc genhtml_branch_coverage=1 00:09:46.863 --rc genhtml_function_coverage=1 00:09:46.863 --rc genhtml_legend=1 00:09:46.863 --rc geninfo_all_blocks=1 00:09:46.863 --rc geninfo_unexecuted_blocks=1 00:09:46.863 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:46.863 ' 00:09:46.863 16:33:50 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:09:46.863 16:33:50 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:09:46.863 16:33:50 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:09:46.863 16:33:50 setup.sh -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:46.863 16:33:50 setup.sh -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:46.863 16:33:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:09:46.863 ************************************ 00:09:46.863 START TEST acl 00:09:46.863 ************************************ 00:09:46.863 16:33:50 setup.sh.acl -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:09:46.863 * Looking for test storage... 
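[editor's note] The lt/cmp_versions walk traced repeatedly in this log (scripts/common.sh @333-@368, here deciding that lcov 1.15 predates 2) is a plain component-wise compare: split both version strings on '.', '-' and ':', then compare numerically field by field, treating missing or non-numeric components as 0. A condensed sketch of that logic, not the exact scripts/common.sh source:

    #!/usr/bin/env bash
    # Condensed sketch of the lt/cmp_versions walk from the trace:
    # "lt 1.15 2" asks whether lcov 1.15 sorts strictly before 2.
    lt() {
        local -a ver1 ver2
        local v a b
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        # Walk the longer of the two component lists, as in common.sh@364.
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            [[ $a =~ ^[0-9]+$ ]] || a=0   # the "decimal" helper in the trace
            [[ $b =~ ^[0-9]+$ ]] || b=0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1   # all components equal -> not strictly less-than
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2: use the branch-coverage LCOV_OPTS"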
00:09:46.863 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:09:46.863 16:33:50 setup.sh.acl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:46.863 16:33:50 setup.sh.acl -- common/autotest_common.sh@1691 -- # lcov --version 00:09:46.863 16:33:50 setup.sh.acl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:46.863 16:33:50 setup.sh.acl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.863 16:33:50 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:09:46.863 16:33:50 setup.sh.acl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.863 16:33:50 setup.sh.acl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:46.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.863 --rc genhtml_branch_coverage=1 00:09:46.863 --rc genhtml_function_coverage=1 00:09:46.863 --rc genhtml_legend=1 00:09:46.863 --rc geninfo_all_blocks=1 00:09:46.863 --rc geninfo_unexecuted_blocks=1 00:09:46.863 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:46.863 ' 00:09:46.863 16:33:50 setup.sh.acl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:46.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.863 --rc genhtml_branch_coverage=1 00:09:46.863 --rc 
genhtml_function_coverage=1 00:09:46.863 --rc genhtml_legend=1 00:09:46.863 --rc geninfo_all_blocks=1 00:09:46.863 --rc geninfo_unexecuted_blocks=1 00:09:46.863 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:46.863 ' 00:09:46.863 16:33:50 setup.sh.acl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:46.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.863 --rc genhtml_branch_coverage=1 00:09:46.863 --rc genhtml_function_coverage=1 00:09:46.863 --rc genhtml_legend=1 00:09:46.863 --rc geninfo_all_blocks=1 00:09:46.863 --rc geninfo_unexecuted_blocks=1 00:09:46.863 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:46.863 ' 00:09:46.863 16:33:50 setup.sh.acl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:46.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.863 --rc genhtml_branch_coverage=1 00:09:46.863 --rc genhtml_function_coverage=1 00:09:46.863 --rc genhtml_legend=1 00:09:46.863 --rc geninfo_all_blocks=1 00:09:46.863 --rc geninfo_unexecuted_blocks=1 00:09:46.863 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:46.863 ' 00:09:46.863 16:33:50 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:09:46.863 16:33:50 setup.sh.acl -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:09:46.863 16:33:50 setup.sh.acl -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:09:46.863 16:33:50 setup.sh.acl -- common/autotest_common.sh@1656 -- # local nvme bdf 00:09:46.863 16:33:50 setup.sh.acl -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:46.863 16:33:50 setup.sh.acl -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:09:46.863 16:33:50 setup.sh.acl -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:09:46.863 16:33:50 setup.sh.acl -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:46.863 16:33:50 setup.sh.acl -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:46.863 16:33:50 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:09:46.863 16:33:50 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:09:46.863 16:33:50 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:09:46.863 16:33:50 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:09:46.863 16:33:50 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:09:46.863 16:33:50 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:09:46.863 16:33:50 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:09:53.447 16:33:57 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:09:53.447 16:33:57 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:09:53.447 16:33:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:53.447 16:33:57 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:09:53.447 16:33:57 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:09:53.447 16:33:57 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:09:56.733 Hugepages 00:09:56.733 node hugesize free / total 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.733 16:34:00 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.733 00:09:56.733 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.733 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:1a:00.0 == *:*:*.* ]] 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 
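[editor's note] The acl.sh scan above reads the "setup.sh status" table line by line: read -r _ dev _ _ _ driver _ pulls out the BDF and Driver columns, hugepage rows and non-NVMe drivers fall through continue, and surviving controllers are appended to devs unless they match $PCI_BLOCKED. A minimal sketch of that loop, assuming the same column layout as the status header printed above; the relative script path is an assumption:

    #!/usr/bin/env bash
    # Sketch of the acl.sh device-collection loop traced above. Columns follow
    # the "Type BDF Vendor Device NUMA Driver Device Block devices" header.
    declare -a devs
    declare -A drivers

    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue        # hugepage rows (e.g. "2048kB") drop out
        [[ $driver == nvme ]] || continue        # ioatdma rows are skipped, as in the log
        [[ ${PCI_BLOCKED:-} == *"$dev"* ]] && continue   # honor the denied list
        devs+=("$dev")
        drivers["$dev"]=$driver
    done < <(./scripts/setup.sh status)

    echo "collected ${#devs[@]} nvme controller(s): ${devs[*]}"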
00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:09:56.734 16:34:00 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:09:56.734 16:34:01 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:09:56.734 16:34:01 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:09:56.734 16:34:01 setup.sh.acl -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:56.734 16:34:01 setup.sh.acl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:56.734 16:34:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:09:56.734 ************************************ 00:09:56.734 START TEST denied 00:09:56.734 ************************************ 00:09:56.734 16:34:01 setup.sh.acl.denied -- 
common/autotest_common.sh@1127 -- # denied 00:09:56.734 16:34:01 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:1a:00.0' 00:09:56.734 16:34:01 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:09:56.734 16:34:01 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:1a:00.0' 00:09:56.734 16:34:01 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:09:56.734 16:34:01 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:10:03.298 0000:1a:00.0 (8086 0a54): Skipping denied controller at 0000:1a:00.0 00:10:03.298 16:34:07 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:1a:00.0 00:10:03.298 16:34:07 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:10:03.298 16:34:07 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:10:03.298 16:34:07 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:1a:00.0 ]] 00:10:03.298 16:34:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:1a:00.0/driver 00:10:03.298 16:34:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:10:03.298 16:34:07 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:10:03.298 16:34:07 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:10:03.298 16:34:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:03.298 16:34:07 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:10:11.423 00:10:11.423 real 0m13.482s 00:10:11.423 user 0m4.149s 00:10:11.423 sys 0m8.509s 00:10:11.423 16:34:14 setup.sh.acl.denied -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:11.423 16:34:14 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:10:11.423 ************************************ 00:10:11.423 END TEST denied 00:10:11.423 ************************************ 00:10:11.423 16:34:14 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:10:11.423 16:34:14 setup.sh.acl -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:11.423 16:34:14 setup.sh.acl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:11.423 16:34:14 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:10:11.423 ************************************ 00:10:11.423 START TEST allowed 00:10:11.423 ************************************ 00:10:11.423 16:34:14 setup.sh.acl.allowed -- common/autotest_common.sh@1127 -- # allowed 00:10:11.423 16:34:14 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:1a:00.0 00:10:11.423 16:34:14 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:10:11.423 16:34:14 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:1a:00.0 .*: nvme -> .*' 00:10:11.423 16:34:14 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:10:11.423 16:34:14 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:10:19.543 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:10:19.543 16:34:23 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:10:19.543 16:34:23 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:10:19.543 16:34:23 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:10:19.543 16:34:23 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:19.543 16:34:23 setup.sh.acl.allowed 
-- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:10:26.124 00:10:26.124 real 0m15.522s 00:10:26.124 user 0m3.670s 00:10:26.124 sys 0m8.417s 00:10:26.124 16:34:30 setup.sh.acl.allowed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:26.124 16:34:30 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:10:26.124 ************************************ 00:10:26.124 END TEST allowed 00:10:26.124 ************************************ 00:10:26.124 00:10:26.124 real 0m39.496s 00:10:26.124 user 0m11.378s 00:10:26.124 sys 0m24.106s 00:10:26.124 16:34:30 setup.sh.acl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:26.124 16:34:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:10:26.124 ************************************ 00:10:26.124 END TEST acl 00:10:26.124 ************************************ 00:10:26.124 16:34:30 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:10:26.124 16:34:30 setup.sh -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:26.124 16:34:30 setup.sh -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:26.124 16:34:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:10:26.124 ************************************ 00:10:26.124 START TEST hugepages 00:10:26.124 ************************************ 00:10:26.124 16:34:30 setup.sh.hugepages -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:10:26.124 * Looking for test storage... 00:10:26.124 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:10:26.124 16:34:30 setup.sh.hugepages -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:26.124 16:34:30 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # lcov --version 00:10:26.124 16:34:30 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:26.124 16:34:30 setup.sh.hugepages -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.124 16:34:30 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:10:26.124 16:34:30 setup.sh.hugepages -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.124 16:34:30 setup.sh.hugepages -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:26.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.124 --rc genhtml_branch_coverage=1 00:10:26.124 --rc genhtml_function_coverage=1 00:10:26.124 --rc genhtml_legend=1 00:10:26.124 --rc geninfo_all_blocks=1 00:10:26.124 --rc geninfo_unexecuted_blocks=1 00:10:26.124 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:26.124 ' 00:10:26.124 16:34:30 setup.sh.hugepages -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:26.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.124 --rc genhtml_branch_coverage=1 00:10:26.124 --rc genhtml_function_coverage=1 00:10:26.124 --rc genhtml_legend=1 00:10:26.124 --rc geninfo_all_blocks=1 00:10:26.124 --rc geninfo_unexecuted_blocks=1 00:10:26.124 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:26.124 ' 00:10:26.124 16:34:30 setup.sh.hugepages -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:26.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.124 --rc genhtml_branch_coverage=1 00:10:26.124 --rc genhtml_function_coverage=1 00:10:26.124 --rc genhtml_legend=1 00:10:26.124 --rc geninfo_all_blocks=1 00:10:26.124 --rc geninfo_unexecuted_blocks=1 00:10:26.124 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:26.124 ' 00:10:26.124 16:34:30 setup.sh.hugepages -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:26.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.124 --rc genhtml_branch_coverage=1 00:10:26.124 --rc genhtml_function_coverage=1 00:10:26.124 --rc genhtml_legend=1 00:10:26.124 --rc geninfo_all_blocks=1 00:10:26.124 --rc geninfo_unexecuted_blocks=1 00:10:26.124 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:26.124 ' 00:10:26.124 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:10:26.124 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:10:26.124 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:10:26.124 16:34:30 
setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:10:26.124 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:10:26.124 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:10:26.124 16:34:30 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:10:26.124 16:34:30 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:10:26.124 16:34:30 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:10:26.124 16:34:30 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:10:26.124 16:34:30 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 64294192 kB' 'MemAvailable: 70312080 kB' 'Buffers: 30740 kB' 'Cached: 20054260 kB' 'SwapCached: 0 kB' 'Active: 14903580 kB' 'Inactive: 5750580 kB' 'Active(anon): 14388108 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573024 kB' 'Mapped: 179176 kB' 'Shmem: 13818948 kB' 'KReclaimable: 587756 kB' 'Slab: 1231792 kB' 'SReclaimable: 587756 kB' 'SUnreclaim: 644036 kB' 'KernelStack: 17792 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434172 kB' 'Committed_AS: 15701844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215200 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r 
var val _
00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:10:26.125 16:34:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[... identical read/compare/continue iterations for the remaining /proc/meminfo keys, Slab through HugePages_Surp ...]
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
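The scan above is setup/common.sh's meminfo helper at work: it splits each /proc/meminfo line with IFS=': ', skips every key that is not the one requested, and echoes the value of the first match (here Hugepagesize -> 2048 kB). A minimal sketch of that loop, reconstructed from the traced commands rather than copied from the SPDK source:

  # get_meminfo KEY [NODE] -- sketch reconstructed from the xtrace above.
  # With no NODE it reads /proc/meminfo; with a NODE it would read the
  # per-node meminfo file, as the [[ -e /sys/devices/system/node/... ]]
  # test later in this trace suggests.
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _ mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip non-matching keys
          echo "$val"                        # value only, unit stripped
          return 0
      done <"$mem_f"
      return 1
  }

Against the snapshots printed later in this log, get_meminfo Hugepagesize prints 2048 and get_meminfo HugePages_Total prints 1024.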
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}"
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes
00:10:26.126 16:34:30 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup
00:10:26.126 16:34:30 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:10:26.126 16:34:30 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:26.126 16:34:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:10:26.126 ************************************
00:10:26.126 START TEST single_node_setup
00:10:26.126 ************************************
00:10:26.126 16:34:30 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1127 -- # single_node_setup
00:10:26.126 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0
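get_nodes and clear_hp above enumerate both NUMA nodes (node0 and node1, hence no_nodes=2) and write 0 to every per-node hugepage pool before the test allocates its own pages. The xtrace shows only the bare 'echo 0'; the redirection target is implied by the hp loop variable. A sketch of that cleanup under those assumptions (a simplification of the traced loop, not the verbatim hugepages.sh, and it needs root to write sysfs):

  shopt -s extglob   # for the node+([0-9]) glob seen in the trace
  clear_hp() {
      local node hp
      for node in /sys/devices/system/node/node+([0-9]); do
          # one hugepages-<size>kB directory per supported page size
          for hp in "$node/hugepages/hugepages-"*; do
              echo 0 >"$hp/nr_hugepages"   # release this node's pool
          done
      done
  }
  clear_hp
  export CLEAR_HUGE=yes   # exported afterwards, as in the trace above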
00:10:26.126 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152
00:10:26.126 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 ))
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0')
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0')
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=()
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 ))
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}"
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:10:26.127 16:34:30 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:10:30.407 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:10:30.407 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:10:30.407 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:10:30.407 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:10:30.407 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:10:30.407 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:10:30.407 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:10:30.407 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:10:30.407 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:10:30.407 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:10:30.407 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:10:30.407 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:10:30.407 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:10:30.407 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:10:30.407 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:10:30.407 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:10:33.695 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci
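The values above fit together: get_test_nr_hugepages is called with size 2097152 (kB, i.e. 2 GiB) and node list '0', the traced guard (( size >= default_hugepages )) passes, and the resulting nr_hugepages=1024 is exactly that size divided by the 2048 kB Hugepagesize read earlier. The division itself falls between the traced lines; a sketch consistent with the numbers shown:

  size=2097152              # requested pool size in kB (2 GiB)
  default_hugepages=2048    # Hugepagesize in kB, from the earlier lookup
  (( size >= default_hugepages ))               # traced guard at @54
  nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024
  NRHUGE=$nr_hugepages      # 1024 pages, handed to scripts/setup.sh
  HUGENODE=0                # single_node_setup: everything on node 0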
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # verify_nr_hugepages
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66457580 kB' 'MemAvailable: 72475436 kB' 'Buffers: 30740 kB' 'Cached: 20054444 kB' 'SwapCached: 0 kB' 'Active: 14904088 kB' 'Inactive: 5750580 kB' 'Active(anon): 14388616 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572868 kB' 'Mapped: 179156 kB' 'Shmem: 13819132 kB' 'KReclaimable: 587724 kB' 'Slab: 1230480 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 642756 kB' 'KernelStack: 17632 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15702712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214944 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:10:35.608 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
[... identical read/compare/continue iterations for the remaining keys, MemFree through HardwareCorrupted ...]
00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0
00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
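verify_nr_hugepages first decides whether anonymous transparent hugepages could skew the count: the traced test at @95 compares the current THP mode string ('always [madvise] never') against *\[\n\e\v\e\r\]*, so AnonHugePages is only read when THP is not pinned to [never]. Here the mode is madvise and the snapshot shows AnonHugePages 0 kB, hence anon=0. A sketch of that branch, reconstructed from the trace and reusing the get_meminfo sketch earlier (standard sysfs path; not the verbatim hugepages.sh):

  # e.g. "always [madvise] never" on this machine
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # 0 (kB) in the snapshot above
  else
      anon=0   # THP fully disabled: nothing to discount
  fi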
setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66457636 kB' 'MemAvailable: 72475492 kB' 'Buffers: 30740 kB' 'Cached: 20054448 kB' 'SwapCached: 0 kB' 'Active: 14904056 kB' 'Inactive: 5750580 kB' 'Active(anon): 14388584 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572856 kB' 'Mapped: 179112 kB' 'Shmem: 13819136 kB' 'KReclaimable: 587724 kB' 'Slab: 1230480 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 642756 kB' 'KernelStack: 17600 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15702732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214928 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
continue 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.610 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.611 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local 
00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66457636 kB' 'MemAvailable: 72475492 kB' 'Buffers: 30740 kB' 'Cached: 20054448 kB' 'SwapCached: 0 kB' 'Active: 14904204 kB' 'Inactive: 5750580 kB' 'Active(anon): 14388732 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573004 kB' 'Mapped: 179112 kB' 'Shmem: 13819136 kB' 'KReclaimable: 587724 kB' 'Slab: 1230480 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 642756 kB' 'KernelStack: 17600 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15702388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214928 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
00:10:35.612 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31-@32 -- # [xtrace condensed: the scan walks every field from MemTotal through HugePages_Free with continue before reaching HugePages_Rsvd]
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:10:35.614 nr_hugepages=1024
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:10:35.614 resv_hugepages=0
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:10:35.614 surplus_hugepages=0
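The snapshot is internally consistent: HugePages_Total (1024) times Hugepagesize (2048 kB) is 2097152 kB, exactly the Hugetlb figure, and with HugePages_Rsvd and HugePages_Surp both 0 the bookkeeping checks traced just below reduce to identities. A sketch of those checks (variable names from the trace at setup/hugepages.sh@106 and @108; the surrounding function body is assumed):

    # Sketch of the consistency checks traced just below; values are the ones
    # get_meminfo returned above, the surrounding code is an assumption.
    nr_hugepages=1024   # HugePages_Total
    resv=0              # HugePages_Rsvd
    surp=0              # HugePages_Surp
    # 1024 pages * 2048 kB/page = 2097152 kB == Hugetlb in the snapshot.
    (( 1024 == nr_hugepages + surp + resv ))   # requested == accounted for
    (( 1024 == nr_hugepages ))                 # no surplus or reserved drift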
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:10:35.614 anon_hugepages=0
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66455812 kB' 'MemAvailable: 72473668 kB' 'Buffers: 30740 kB' 'Cached: 20054508 kB' 'SwapCached: 0 kB' 'Active: 14907028 kB' 'Inactive: 5750580 kB' 'Active(anon): 14391556 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575736 kB' 'Mapped: 179616 kB' 'Shmem: 13819196 kB' 'KReclaimable: 587724 kB' 'Slab: 1230480 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 642756 kB' 'KernelStack: 17600 kB' 'PageTables: 8524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15706772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214928 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
00:10:35.614 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:10:35.615 16:34:39 setup.sh.hugepages.single_node_setup -- setup/common.sh@31-@32 -- # [xtrace condensed: the scan again walks MemTotal through Unaccepted with continue; the wall-clock stamp ticks from 16:34:39 to 16:34:40 partway through]
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
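get_nodes (setup/hugepages.sh@26-@32 above) enumerates the NUMA nodes from sysfs and records a per-node hugepage count; on this two-node machine the single_node_setup policy leaves all 1024 pages on node 0 and none on node 1. A sketch of the enumeration (the per-node lookup inside the loop is my assumption; the trace shows only the already-expanded assignments nodes_sys[0]=1024 and nodes_sys[1]=0):

    # Sketch of get_nodes as suggested by the trace; reuses the get_meminfo
    # sketch earlier. The lookup inside the loop is an assumption.
    shopt -s extglob
    declare -A nodes_sys

    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} keeps only the trailing index: .../node1 -> 1
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done

    no_nodes=${#nodes_sys[@]}   # 2 on this machine
    (( no_nodes > 0 ))          # bail out if sysfs exposed no nodes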
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:10:35.616 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 35907968 kB' 'MemUsed: 12156896 kB' 'SwapCached: 0 kB' 'Active: 6804220 kB' 'Inactive: 1198740 kB' 'Active(anon): 6571960 kB' 'Inactive(anon): 0 kB' 'Active(file): 232260 kB' 'Inactive(file): 1198740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7567664 kB' 'Mapped: 89864 kB' 'AnonPages: 438436 kB' 'Shmem: 6136664 kB' 'KernelStack: 10200 kB' 'PageTables: 5644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271872 kB' 'Slab: 611252 kB' 'SReclaimable: 271872 kB' 'SUnreclaim: 339380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:10:35.617 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@31-@32 -- # [xtrace condensed: the node 0 scan walks MemTotal through HugePages_Total with continue, heading for HugePages_Surp; the capture breaks off here, mid-scan]
00:10:35.617 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:10:35.617 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue
00:10:35.617 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': '
00:10:35.617 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _
00:10:35.617 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:10:35.617 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0
00:10:35.617 16:34:40 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0
00:10:35.617 16:34:40 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:10:35.617 16:34:40 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:10:35.617 16:34:40 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:10:35.617 16:34:40 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:10:35.617 16:34:40 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:10:35.618 node0=1024 expecting 1024
00:10:35.618 16:34:40 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:10:35.618
00:10:35.618 real	0m9.478s
00:10:35.618 user	0m2.149s
00:10:35.618 sys	0m4.193s
00:10:35.618 16:34:40 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:35.618 16:34:40 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x
00:10:35.618 ************************************
00:10:35.618 END TEST single_node_setup
00:10:35.618 ************************************
00:10:35.618 16:34:40 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc
00:10:35.618 16:34:40 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:10:35.618 16:34:40 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:35.618 16:34:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:10:35.618 ************************************
00:10:35.618 START TEST even_2G_alloc
00:10:35.618 ************************************
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1127 -- # even_2G_alloc
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
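The scan that just completed is setup/common.sh's get_meminfo walking a meminfo file field by field with IFS=': ' until the requested key matches, then echoing its value. A minimal self-contained sketch of that lookup pattern, under the assumption of a hypothetical helper name get_field (the real helper additionally reads the file into an array and strips the "Node <n> " prefix that per-node meminfo files carry, as the mem=("${mem[@]#Node +([0-9]) }") step in this trace shows):

    # Sketch only -- get_field is a hypothetical stand-in for the lookup
    # that setup/common.sh performs in the trace above.
    get_field() { # usage: get_field HugePages_Surp [meminfo-file]
        local get=$1 file=${2:-/proc/meminfo}
        local var val _
        # IFS=': ' splits "HugePages_Surp:    0" into var=HugePages_Surp, val=0
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"   # corresponds to the "echo 0" step in the log
                return 0
            fi
        done <"$file"
        return 1
    }

Run as get_field HugePages_Surp, this prints 0 on the node scanned above, matching the "echo 0 / return 0" pair in the trace.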
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:10:35.618 16:34:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:10:38.908 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:10:38.908 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:10:38.908 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:10:38.908 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:10:38.908 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:10:38.908 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:10:38.908 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:10:38.908 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:10:38.908 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:10:38.908 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:10:38.908 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:10:38.908 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:10:38.908 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:10:38.908 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:10:38.908 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:10:38.908 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:10:38.908 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:10:41.448 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages
00:10:41.448 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node
00:10:41.448 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:10:41.448 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s
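get_test_nr_hugepages_per_node above counts _no_nodes=2 down to zero, assigning 512 pages per node so the 1024 requested 2 MiB pages (2097152 kB / 2048 kB) are split evenly before setup.sh is invoked with NRHUGE=1024. A hedged reconstruction of that arithmetic, with split_hugepages as a hypothetical name; the trace does not show how a remainder would be handled (1024 divides evenly here):

    # Sketch under the assumptions above: evenly divide nr_hugepages across
    # NUMA nodes, mirroring the nodes_test[_no_nodes - 1]=512 steps in the log.
    split_hugepages() { # e.g. split_hugepages 1024 2 -> node0=512 node1=512
        local nr=$1 nodes=$2 per node
        per=$((nr / nodes))
        for ((node = 0; node < nodes; node++)); do
            printf 'node%u=%u\n' "$node" "$per"
        done
    }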
00:10:41.448 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp
00:10:41.448 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv
00:10:41.448 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon
00:10:41.448 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:10:41.448 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:10:41.448 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:10:41.448 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:10:41.448 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:10:41.448 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:41.448 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:41.448 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:41.448 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:41.448 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:41.448 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:41.449 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:41.449 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:41.449 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66451332 kB' 'MemAvailable: 72469188 kB' 'Buffers: 30740 kB' 'Cached: 20054632 kB' 'SwapCached: 0 kB' 'Active: 14904908 kB' 'Inactive: 5750580 kB' 'Active(anon): 14389436 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573428 kB' 'Mapped: 178328 kB' 'Shmem: 13819320 kB' 'KReclaimable: 587724 kB' 'Slab: 1230604 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 642880 kB' 'KernelStack: 17568 kB' 'PageTables: 8396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15693484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215104 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
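The meminfo dump just printed can be sanity-checked with one line of arithmetic: HugePages_Total (1024) times Hugepagesize (2048 kB) should equal the Hugetlb line (2097152 kB, i.e. the 2 GiB this test requested). For example:

    # Compare the computed hugepage pool size against the kernel's Hugetlb
    # line; on the dump above both sides come out to 2097152 kB.
    awk '/^HugePages_Total:/ {t = $2}
         /^Hugepagesize:/    {sz = $2}
         /^Hugetlb:/         {h = $2}
         END {printf "computed=%d kB reported=%d kB\n", t * sz, h}' /proc/meminfo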
00:10:41.450 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:10:41.450 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:10:41.450 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:10:41.450 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0
00:10:41.450 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:10:41.450 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:10:41.450 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:10:41.450 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:10:41.450 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:41.450 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:41.450 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:41.450 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:41.450 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:41.450 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:41.450 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:41.450 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:41.450 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66451752 kB' 'MemAvailable: 72469608 kB' 'Buffers: 30740 kB' 'Cached: 20054636 kB' 'SwapCached: 0 kB' 'Active: 14905108 kB' 'Inactive: 5750580 kB' 'Active(anon): 14389636 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573668 kB' 'Mapped: 178292 kB' 'Shmem: 13819324 kB' 'KReclaimable: 587724 kB' 'Slab: 1230604 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 642880 kB' 'KernelStack: 17568 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15693504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215088 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
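With the dump above reporting HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0 and HugePages_Surp: 0, the 2 GiB pool is fully allocated and idle. An illustrative check in the spirit of what verify_nr_hugepages is gathering here, reusing the hypothetical get_field sketch from earlier (the exact pass/fail criteria of hugepages.sh are not shown in this excerpt):

    # Assumption-labeled sketch: a healthy idle pool has no surplus or
    # reserved pages, and every page is free.
    total=$(get_field HugePages_Total)
    free=$(get_field HugePages_Free)
    rsvd=$(get_field HugePages_Rsvd)
    surp=$(get_field HugePages_Surp)
    if (( surp == 0 && rsvd == 0 && free == total )); then
        echo "hugepage pool idle and fully allocated: $total pages"
    else
        echo "unexpected accounting: total=$total free=$free rsvd=$rsvd surp=$surp"
    fi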
setup/common.sh@31 -- # IFS=': ' 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66452004 kB' 'MemAvailable: 72469860 kB' 'Buffers: 30740 kB' 'Cached: 20054636 kB' 'SwapCached: 0 kB' 'Active: 14906304 kB' 'Inactive: 5750580 kB' 'Active(anon): 14390832 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574940 kB' 'Mapped: 178964 kB' 'Shmem: 13819324 kB' 'KReclaimable: 587724 kB' 'Slab: 1230604 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 642880 kB' 'KernelStack: 17584 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15695672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215072 kB' 'VmallocChunk: 0 
00:10:41.452 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: keys MemTotal through HugePages_Free skipped with continue; HugePages_Rsvd matched]
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:10:41.454 nr_hugepages=1024
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:10:41.454 resv_hugepages=0
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:10:41.454 surplus_hugepages=0
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:10:41.454 anon_hugepages=0
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
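The trace above is the generic key lookup in setup/common.sh: the meminfo file is slurped with mapfile, any "Node <id> " prefix is stripped, and each line is split with IFS=': ' until the requested key matches, at which point the value is echoed. A minimal standalone sketch of that pattern (the function name and usage below are illustrative, not the exact setup/common.sh code):

    # Return the numeric value for one /proc/meminfo key, e.g. "HugePages_Surp".
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # var holds the key (colon stripped by IFS), val the number, _ the unit.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    surp=$(get_meminfo_value HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo_value HugePages_Rsvd)   # 0 in this run

With surp=0 and resv=0, the (( 1024 == nr_hugepages + surp + resv )) check above passes, and the script re-reads HugePages_Total to confirm the kernel really allocated all 1024 pages.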
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66444888 kB' 'MemAvailable: 72462744 kB' 'Buffers: 30740 kB' 'Cached: 20054692 kB' 'SwapCached: 0 kB' 'Active: 14910348 kB' 'Inactive: 5750580 kB' 'Active(anon): 14394876 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 578852 kB' 'Mapped: 179008 kB' 'Shmem: 13819380 kB' 'KReclaimable: 587724 kB' 'Slab: 1230604 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 642880 kB' 'KernelStack: 17552 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15699664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215076 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
00:10:41.454 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: keys MemTotal through Unaccepted skipped with continue; HugePages_Total matched]
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
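get_nodes just enumerated the NUMA nodes under /sys/devices/system/node and recorded the expected even split: 1024 pages across 2 nodes is 512 apiece. A sketch of that bookkeeping, assuming the same sysfs layout (extglob is needed for the +([0-9]) glob; variable names follow the trace, the echo is illustrative):

    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512      # expected 2M pages per node
    done
    no_nodes=${#nodes_sys[@]}              # 2 on this machine
    echo "expected per node: $((1024 / no_nodes))"

Each per-node expectation is then checked against that node's own meminfo, which is what the get_meminfo HugePages_Surp 0 and ... 1 calls below do.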
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 36948200 kB' 'MemUsed: 11116664 kB' 'SwapCached: 0 kB' 'Active: 6803176 kB' 'Inactive: 1198740 kB' 'Active(anon): 6570916 kB' 'Inactive(anon): 0 kB' 'Active(file): 232260 kB' 'Inactive(file): 1198740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7567712 kB' 'Mapped: 89464 kB' 'AnonPages: 437420 kB' 'Shmem: 6136712 kB' 'KernelStack: 10168 kB' 'PageTables: 5452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271872 kB' 'Slab: 611568 kB' 'SReclaimable: 271872 kB' 'SUnreclaim: 339696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:10:41.456 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: node0 keys MemTotal through HugePages_Free skipped with continue; HugePages_Surp matched]
00:10:41.457 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:10:41.457 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:10:41.457 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:10:41.457 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:10:41.457 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:10:41.457 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:10:41.457 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:10:41.458 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:10:41.458 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:10:41.458 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:41.458 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:41.458 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:10:41.458 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:10:41.458 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:41.458 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:41.458 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:41.458 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220576 kB' 'MemFree: 29501280 kB' 'MemUsed: 14719296 kB' 'SwapCached: 0 kB' 'Active: 8102056 kB' 'Inactive: 4551840 kB' 'Active(anon): 7818844 kB' 'Inactive(anon): 0 kB' 'Active(file): 283212 kB' 'Inactive(file): 4551840 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12517744 kB' 'Mapped: 88828 kB' 'AnonPages: 136224 kB' 'Shmem: 7682692 kB' 'KernelStack: 7384 kB' 'PageTables: 2888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 315852 kB' 'Slab: 619036 kB' 'SReclaimable: 315852 kB' 'SUnreclaim: 303184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:10:41.458 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:41.458 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [xtrace condensed: the read/continue scan skips the node1 meminfo keys (MemTotal … HugePages_Free) until HugePages_Surp matches]
00:10:41.459 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:10:41.459 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:10:41.459 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:10:41.459 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:10:41.459 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:10:41.459 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:10:41.459 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:10:41.459 node0=512 expecting 512
00:10:41.459 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:10:41.459 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:10:41.459 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:10:41.459 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512'
00:10:41.459 node1=512 expecting 512
00:10:41.459 16:34:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]]
00:10:41.459
00:10:41.459 real 0m5.493s
00:10:41.459 user 0m1.568s
00:10:41.459 sys 0m3.738s
00:10:41.459 16:34:45 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:41.459 16:34:45 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:10:41.459 ************************************
00:10:41.459 END TEST even_2G_alloc
00:10:41.459 ************************************
00:10:41.459 16:34:45 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc
00:10:41.459 16:34:45 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:10:41.459 16:34:45 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:41.459 16:34:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:10:41.459 ************************************
00:10:41.459 START TEST odd_alloc
00:10:41.459 ************************************
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1127 -- # odd_alloc
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
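Before the per-node assignments traced just below, a sketch of the arithmetic this odd_alloc setup performs. Only the results are visible in the trace (nr_hugepages=1025 for 2098176 kB, then node1=512 and node0=513), so the rounding expression here is an assumption:

    #!/usr/bin/env bash
    default_hugepages=2048   # kB, from "Hugepagesize: 2048 kB" in the meminfo dumps

    get_test_nr_hugepages() {
        local size=$1   # kB; 2098176 kB here, i.e. HUGEMEM=2049 MB
        # 2098176 / 2048 = 1024.5 -> 1025, so a ceiling division is assumed.
        nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
    }

    get_test_nr_hugepages_per_node() {
        local _nr_hugepages=$nr_hugepages _no_nodes=2
        nodes_test=()
        # Walk nodes from the highest index down: each node takes the even
        # share, and the remainder accumulates onto node 0 (@80-@83 below).
        while (( _no_nodes > 0 )); do
            nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
            : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))
            : $(( _no_nodes -= 1 ))
        done
    }

    get_test_nr_hugepages 2098176
    get_test_nr_hugepages_per_node
    declare -p nr_hugepages nodes_test   # -> 1025, ([0]="513" [1]="512")

Walking the nodes from the last index down is what leaves the odd page on node 0, which is why the trace below assigns 512 first (node1) and 513 second (node0).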
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:10:41.459 16:34:45 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:10:44.774 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:10:44.774 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:10:44.774 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:10:44.774 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:10:44.774 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:10:44.774 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:10:44.774 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:10:44.774 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:10:44.774 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:10:44.774 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:10:44.774 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:10:44.774 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:10:44.774 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:10:44.774 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:10:45.032 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:10:45.032 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:10:45.032 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:10:47.566 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages
00:10:47.566 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node
00:10:47.566 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:10:47.567 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:10:47.567 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp
00:10:47.567 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv
00:10:47.567 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon
00:10:47.567 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:10:47.567 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
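verify_nr_hugepages (hugepages.sh@95-@98 above) first rules out transparent-hugepage interference before checking surplus pages; the trace of that lookup continues below. A hedged sketch of the check, with the sysfs path assumed from the "always [madvise] never" string visible in the trace:

    #!/usr/bin/env bash
    # THP setting as traced: "always [madvise] never", i.e. madvise is selected,
    # so the "[never]" guard fails and AnonHugePages is looked up explicitly.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # Equivalent of "get_meminfo AnonHugePages" in the trace that follows;
        # inlined with awk here to keep the sketch self-contained.
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB
    fi
    echo "anon=${anon}"   # 0 kB in this run, so THP does not skew the page count

With anon at 0 kB and HugePages_Surp at 0, the per-node counts the test reads back are exactly the explicitly reserved pages.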
00:10:47.567 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:10:47.567 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:10:47.567 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:10:47.567 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:47.567 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:47.567 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:47.567 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:47.567 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:47.567 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:47.567 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:47.567 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:47.567 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66448856 kB' 'MemAvailable: 72466712 kB' 'Buffers: 30740 kB' 'Cached: 20054840 kB' 'SwapCached: 0 kB' 'Active: 14907012 kB' 'Inactive: 5750580 kB' 'Active(anon): 14391540 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575496 kB' 'Mapped: 178384 kB' 'Shmem: 13819528 kB' 'KReclaimable: 587724 kB' 'Slab: 1230956 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 643232 kB' 'KernelStack: 17568 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 15694388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215024 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
00:10:47.568 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace condensed: the read/continue scan skips /proc/meminfo keys (MemTotal … HardwareCorrupted) until AnonHugePages matches]
00:10:47.568 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:10:47.568 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:10:47.568 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0
00:10:47.568 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:10:47.568 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:10:47.568 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:10:47.568 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:10:47.568 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:47.568 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:47.568 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:47.568 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:47.568 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:47.568 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:47.568 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:47.568 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66448884 kB' 'MemAvailable: 72466740 kB' 'Buffers: 30740 kB' 'Cached: 20054844 kB' 'SwapCached: 0 kB' 'Active: 14907356 kB' 'Inactive: 5750580 kB' 'Active(anon): 14391884 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575888 kB' 'Mapped: 178384 kB' 'Shmem: 13819532 kB' 'KReclaimable: 587724 kB' 'Slab: 1230956 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 643232 kB' 'KernelStack: 17568 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 15694404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215008 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
00:10:47.568 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:47.568 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [xtrace condensed: the IFS=': ' read/continue scan skips /proc/meminfo keys (MemTotal … VmallocTotal) without matching HugePages_Surp]
00:10:47.569 16:34:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.569 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.569 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.569 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.569 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.569 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.569 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.569 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.569 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.569 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 
16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- 
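What the xtrace above is stepping through is the get_meminfo helper from setup/common.sh: the HugePages_Surp scan has just matched and returned 0 (stored as surp=0), and the same walk is now being set up for HugePages_Rsvd. Since no node argument was given, node is empty, the /sys/devices/system/node/node/meminfo existence test fails, and the helper falls back to /proc/meminfo. A minimal sketch of the helper as reconstructed from this trace (not the verbatim SPDK source; details such as the @25 check are simplified):

    get_meminfo() {  # usage: get_meminfo <field> [node]
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem
        # Use the per-node meminfo file when a node was requested and exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node N " prefix of per-node files
        while IFS=': ' read -r var val _; do
            # $val is the number; any trailing unit (kB) lands in $_
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
    }

Every key that is not the requested field just hits the continue branch, which is why the log shows one [[ ... ]]/continue pair per /proc/meminfo line.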
# [[ -n '' ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66449388 kB' 'MemAvailable: 72467244 kB' 'Buffers: 30740 kB' 'Cached: 20054860 kB' 'SwapCached: 0 kB' 'Active: 14907372 kB' 'Inactive: 5750580 kB' 'Active(anon): 14391900 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575884 kB' 'Mapped: 178384 kB' 'Shmem: 13819548 kB' 'KReclaimable: 587724 kB' 'Slab: 1230956 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 643232 kB' 'KernelStack: 17568 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 15694428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215008 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.570 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 
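A detail worth pausing on in the /proc/meminfo snapshot printed above: with 'Hugepagesize: 2048 kB' and 'HugePages_Total: 1025', the kernel's 'Hugetlb: 2099200 kB' line is exactly the product of the two, so the whole odd-sized pool the test configured is accounted for:

    pages=1025 page_kb=2048
    echo $(( pages * page_kb ))   # 2099200 -> matches "Hugetlb: 2099200 kB"

The scan itself continues below, rejecting each remaining key until HugePages_Rsvd matches and returns 0.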
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.571 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025 00:10:47.572 nr_hugepages=1025 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:10:47.572 resv_hugepages=0 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:10:47.572 surplus_hugepages=0 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:10:47.572 anon_hugepages=0 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv )) 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages )) 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66454992 kB' 'MemAvailable: 72472848 kB' 'Buffers: 30740 kB' 'Cached: 20054896 kB' 'SwapCached: 0 kB' 'Active: 14907032 kB' 'Inactive: 5750580 kB' 'Active(anon): 14391560 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575460 kB' 'Mapped: 178384 kB' 'Shmem: 13819584 kB' 'KReclaimable: 587724 kB' 'Slab: 1230924 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 643200 kB' 'KernelStack: 17552 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 15694448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215008 kB' 'VmallocChunk: 0 kB' 
'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 
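With resv=0 in hand, hugepages.sh echoed the summary (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and is now running the scan a third time for HugePages_Total, which the snapshot just printed says will be 1025; line @109 then asserts (( HugePages_Total == nr_hugepages + surp + resv )), i.e. 1025 == 1025 + 0 + 0. Outside the harness, the same four counters can be checked in one go (illustrative command, not part of the suite):

    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
    # HugePages_Total:    1025
    # HugePages_Free:     1025
    # HugePages_Rsvd:        0
    # HugePages_Surp:        0

The per-key [[ ... ]]/continue walk that follows is identical to the two before it, just with HugePages_Total as the target.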
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.572 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.573 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.574 16:34:51 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- 
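get_nodes is where the test earns its odd_alloc name: 1025 pages cannot be split evenly across the two NUMA nodes, so the expected layout is seeded as nodes_sys[0]=513 and nodes_sys[1]=512 with no_nodes=2, and the loop that starts next reads each node's real counters back for comparison. For reference, this is how such an odd split is normally requested through the kernel's standard per-node hugepage interface (a hedged sketch; the suite's own allocation path may differ):

    # Ask for 513 + 512 = 1025 two-megabyte pages, split across node0/node1
    echo 513 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 512 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages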
setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:47.574 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 36927544 kB' 'MemUsed: 11137320 kB' 'SwapCached: 0 kB' 'Active: 6806592 kB' 'Inactive: 1198740 kB' 'Active(anon): 6574332 kB' 'Inactive(anon): 0 kB' 'Active(file): 232260 kB' 'Inactive(file): 1198740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7567760 kB' 'Mapped: 89544 kB' 'AnonPages: 440964 kB' 'Shmem: 6136760 kB' 'KernelStack: 10200 kB' 'PageTables: 5592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271872 kB' 'Slab: 611432 kB' 'SReclaimable: 271872 kB' 'SUnreclaim: 339560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace condensed: for every field of the snapshot above, setup/common.sh@31 runs IFS=': ' and read -r var val _, and setup/common.sh@32 tests [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] then continues, until the final HugePages_Surp line matches]
00:10:47.575 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:10:47.575 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:10:47.575 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:10:47.575 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:10:47.575 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:10:47.575 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:10:47.575 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:10:47.575 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:10:47.575 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:10:47.575 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:10:47.575 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:47.575 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:47.575 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:10:47.575 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:10:47.575 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:47.575 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:47.575 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:47.575 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220576 kB' 'MemFree: 29527952 kB' 'MemUsed: 14692624 kB' 'SwapCached: 0 kB' 'Active: 8100720 kB' 'Inactive: 4551840 kB' 'Active(anon): 7817508 kB' 'Inactive(anon): 0 kB' 'Active(file): 283212 kB' 'Inactive(file): 4551840 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12517900 kB' 'Mapped: 88840 kB' 'AnonPages: 134748 kB' 'Shmem: 7682848 kB' 'KernelStack: 7384 kB' 'PageTables: 2736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 315852 kB' 'Slab: 619492 kB' 'SReclaimable: 315852 kB' 'SUnreclaim: 303640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: same per-field scan as above, ending when HugePages_Surp matches]
00:10:47.577 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:10:47.577 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:10:47.577 16:34:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:10:47.577 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:10:47.577 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:10:47.577 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:10:47.577 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:10:47.577 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513'
00:10:47.577 node0=513 expecting 513
00:10:47.577 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:10:47.577 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:10:47.577 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:10:47.577 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512'
00:10:47.577 node1=512 expecting 512
00:10:47.577 16:34:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:10:47.577 
00:10:47.577 real 0m6.210s
00:10:47.577 user 0m2.207s
00:10:47.577 sys 0m3.994s
00:10:47.577 16:34:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:47.577 16:34:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:10:47.577 ************************************
00:10:47.577 END TEST odd_alloc
00:10:47.577 ************************************
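The condensed get_meminfo traces above are easier to follow as source. Below is a minimal reconstruction from the xtrace output: the names are exactly those in the trace (setup/common.sh), the loop structure is inferred, the printf '%s\n' lines appear to be the process substitution feeding the read loop, and the backslash-escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p is simply how bash xtrace prints a quoted (literal, non-glob) pattern operand:

    # Sketch reconstructed from the xtrace above; not the verbatim SPDK source.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1   # field to report, e.g. HugePages_Surp or AnonHugePages
        local node=$2  # optional NUMA node; empty selects the system-wide view
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Prefer the per-node counters when a node is given and sysfs has them
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        # Per-node meminfo prefixes every line with "Node N "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")

        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # quoted RHS forces a literal match
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

The same quoted-operand rendering shows up in the pass/fail line above, [[ 512 513 == \5\1\2\ \5\1\3 ]]: two space-joined node-count lists are compared literally, and both hold "512 513", so odd_alloc passes.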
00:10:47.577 16:34:51 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc
00:10:47.577 16:34:51 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:10:47.577 16:34:51 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:47.577 16:34:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:10:47.577 ************************************
00:10:47.577 START TEST custom_alloc
00:10:47.577 ************************************
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1127 -- # custom_alloc
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=,
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=()
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 ))
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 ))
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512
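The sizing arithmetic just traced reduces to a few lines. Here is a sketch of hugepages.sh@48-83 reconstructed from the traced values; the 2048 kB default hugepage size is an assumption taken from the Hugepagesize field reported later in this log, and the ":" lines at @82/@83 are xtrace's view of no-op arithmetic run for its side effects:

    # Reconstruction from the traced values; a sketch, not the verbatim source.
    get_test_nr_hugepages() {
        local size=$1                    # requested pool in kB: 1048576 = 1 GiB
        local default_hugepages=2048     # kB; assumed from Hugepagesize above
        ((size >= default_hugepages)) || return 1
        nr_hugepages=$((size / default_hugepages))   # 1048576 / 2048 = 512
        get_test_nr_hugepages_per_node
    }

    get_test_nr_hugepages_per_node() {
        local _nr_hugepages=$nr_hugepages
        local _no_nodes=2                # NUMA nodes on this rig
        local -g nodes_test=()
        # Spread the pages evenly, filling the highest node first, exactly as
        # the trace shows: nodes_test[1]=256, then nodes_test[0]=256.
        while ((_no_nodes > 0)); do
            nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
            : $((_nr_hugepages -= nodes_test[_no_nodes - 1]))  # trace "@82 : 256", ": 0"
            : $((--_no_nodes))                                 # trace "@83 : 1", ": 0"
        done
    }

When nodes_hp already holds explicit per-node requests, as in the second call below, the traced @73-@77 branch copies nodes_hp into nodes_test instead of splitting evenly.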
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 ))
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 ))
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:10:47.577 16:34:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 ))
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}"
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}"
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] ))
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}"
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] ))
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=()
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 ))
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 ))
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}"
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}"
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:10:47.577 16:34:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:10:51.767 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:10:51.767 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:10:51.767 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:10:51.767 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:10:51.767 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:10:51.767 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:10:51.767 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:10:51.767 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:10:51.767 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:10:51.767 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:10:51.767 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:10:51.767 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:10:51.767 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:10:51.767 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:10:51.767 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:10:51.767 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:10:51.767 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages
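At this point the test has asked scripts/setup.sh, via HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', for 512 pages on node 0 and 1024 on node 1 (1536 total), and verify_nr_hugepages now reads the pools back. On Linux, per-node 2 MiB hugepage pools are controlled through the standard sysfs nodes; the sketch below shows that mechanism. The sysfs paths are the kernel's documented interface, but whether setup.sh drives them exactly this way is an assumption:

    # Standard kernel sysfs interface for per-node hugepage pools; illustrates
    # the effect of the HUGENODE request, not the verbatim scripts/setup.sh code.
    reserve_node_hugepages() {
        local node=$1 pages=$2
        local pool=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB
        echo "$pages" | sudo tee "$pool/nr_hugepages" >/dev/null
        # Read back: the kernel may grant fewer pages than requested if memory
        # on that node is too fragmented.
        cat "$pool/nr_hugepages"
    }

    reserve_node_hugepages 0 512     # nodes_hp[0]=512
    reserve_node_hugepages 1 1024    # nodes_hp[1]=1024

The HugePages_Total values in the node snapshots of this log (513/512 during odd_alloc, and the 1536 system-wide total below) are exactly what such a read-back reports.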
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:53.681 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 65419104 kB' 'MemAvailable: 71436960 kB' 'Buffers: 30740 kB' 'Cached: 20055044 kB' 'SwapCached: 0 kB' 'Active: 14908800 kB' 'Inactive: 5750580 kB' 'Active(anon): 14393328 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576804 kB' 'Mapped: 178432 kB' 'Shmem: 13819732 kB' 'KReclaimable: 587724 kB' 'Slab: 1231736 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 644012 kB' 'KernelStack: 17568 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 15695232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215168 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
[xtrace condensed: for every field of the snapshot above, setup/common.sh@31 runs IFS=': ' and read -r var val _, and setup/common.sh@32 tests [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] then continues, until the AnonHugePages line matches]
00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0
00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 65419732 kB' 'MemAvailable: 71437588 kB' 'Buffers: 30740 kB' 'Cached: 20055048 kB' 'SwapCached: 0 kB' 'Active: 14909180 kB' 'Inactive: 5750580 kB' 'Active(anon): 14393708 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577232 kB' 'Mapped: 178432 kB' 'Shmem: 13819736 kB' 'KReclaimable: 587724 kB' 'Slab: 1231736 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 644012 kB' 'KernelStack: 17584 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 15695248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215152 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 
0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.683 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:53.684 16:34:58 
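The wall of xtrace above is one small helper doing a simple thing: get_meminfo() from test/setup/common.sh snapshots /proc/meminfo (or a node's meminfo under /sys/devices/system/node when a node argument is given), strips any "Node N " prefix, then walks the snapshot with IFS=': ' read until the requested field matches and echoes its value. A minimal bash sketch, reconstructed from the traced commands alone rather than from the SPDK source, so the exact control flow and the loop shape are assumptions:

shopt -s extglob   # the "Node +([0-9]) " strip below uses an extended glob

# Sketch of the get_meminfo() being traced above (illustration, not the verbatim source).
get_meminfo() {
    local get=$1
    local node=${2:-}   # optional NUMA node; empty => system-wide /proc/meminfo
    local var val _
    local mem_f mem

    mem_f=/proc/meminfo
    # Prefer the per-node view when a node was requested and sysfs exposes one
    # (mirrors the [[ -e /sys/devices/system/node/node/meminfo ]] / [[ -n '' ]] checks above).
    if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node meminfo lines carry a "Node N " prefix

    # Linear scan: split each "Field: value [kB]" line on ": " and stop at the first match.
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

# Used exactly as in the trace, e.g.: surp=$(get_meminfo HugePages_Surp)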
00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0
00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:53.684 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:53.685 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:53.685 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:53.685 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:53.685 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:53.685 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:53.685 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:53.685 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 65421144 kB' 'MemAvailable: 71439000 kB' 'Buffers: 30740 kB' 'Cached: 20055064 kB' 'SwapCached: 0 kB' 'Active: 14909216 kB' 'Inactive: 5750580 kB' 'Active(anon): 14393744 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577236 kB' 'Mapped: 178432 kB' 'Shmem: 13819752 kB' 'KReclaimable: 587724 kB' 'Slab: 1231736 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 644012 kB' 'KernelStack: 17584 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 15695272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215152 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
00:10:53.685 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
[xtrace condensed: the per-field scan repeats as above for MemFree through HugePages_Free, none matching \H\u\g\e\P\a\g\e\s\_\R\s\v\d]
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536
00:10:53.951 nr_hugepages=1536
16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:10:53.951 resv_hugepages=0
16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:10:53.951 surplus_hugepages=0
16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:10:53.951 anon_hugepages=0
16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv ))
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages ))
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
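The hugepages.sh@101-108 lines above are the point of the whole scan: the test echoes the counters it just read back and asserts that the 1536 hugepages it configured are fully accounted for before cross-checking the kernel's total. As a sketch of that arithmetic, with the values and variable names taken from the trace (an illustration, not the verbatim hugepages.sh):

# Counters read back via get_meminfo, as echoed in the trace above:
nr_hugepages=1536   # configured hugepage pool size
resv=0              # HugePages_Rsvd: pages reserved but not yet faulted in
surp=0              # HugePages_Surp: surplus pages beyond the configured pool
anon=0              # AnonHugePages (kB): THP usage, separate from the hugetlb pool

# hugepages.sh@106: every requested page is a plain pool page, with nothing
# reserved or surplus outstanding (1536 == 1536 + 0 + 0)...
(( 1536 == nr_hugepages + surp + resv ))
# hugepages.sh@108: ...and the pool size itself matches the request, after which
# the test re-reads HugePages_Total (hugepages.sh@109) as a cross-check.
(( 1536 == nr_hugepages ))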
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 65422488 kB' 'MemAvailable: 71440344 kB' 'Buffers: 30740 kB' 'Cached: 20055084 kB' 'SwapCached: 0 kB' 'Active: 14909236 kB' 'Inactive: 5750580 kB' 'Active(anon): 14393764 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577236 kB' 'Mapped: 178432 kB' 'Shmem: 13819772 kB' 'KReclaimable: 587724 kB' 'Slab: 1231736 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 644012 kB' 'KernelStack: 17584 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 15695292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215152 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
00:10:53.951 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
[xtrace condensed: the per-field scan repeats as above from MemFree onward (through PageTables at this point), none matching \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l; the trace continues]
00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc
-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 
16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.952 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc 
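For orientation, the get_nodes walk a few entries above fills one nodes_sys slot per NUMA node. A minimal Bash sketch of that pattern follows; the xtrace only shows the resulting assignments (512 and 1024), so reading each node's HugePages_Total out of its sysfs meminfo file is an assumption here, not the literal SPDK source:

  # Sketch: one nodes_sys entry per /sys NUMA node, keyed by node id.
  shopt -s extglob nullglob            # the +([0-9]) glob needs extglob
  declare -a nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      # "node0" -> "0"; the value is assumed read from the per-node
      # meminfo, e.g. "Node 0 HugePages_Total: 512" -> field 4.
      nodes_sys[${node##*node}]=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
  done
  no_nodes=${#nodes_sys[@]}            # 2 on this machine
  (( no_nodes > 0 ))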
00:10:53.953 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 36931136 kB' 'MemUsed: 11133728 kB' 'SwapCached: 0 kB' 'Active: 6808284 kB' 'Inactive: 1198740 kB' 'Active(anon): 6576024 kB' 'Inactive(anon): 0 kB' 'Active(file): 232260 kB' 'Inactive(file): 1198740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7567844 kB' 'Mapped: 89604 kB' 'AnonPages: 442388 kB' 'Shmem: 6136844 kB' 'KernelStack: 10216 kB' 'PageTables: 5660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271872 kB' 'Slab: 612624 kB' 'SReclaimable: 271872 kB' 'SUnreclaim: 340752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... identical xtrace triplets elided while get_meminfo scans the node0 fields (MemTotal through HugePages_Free) for HugePages_Surp ...]
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
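The lookup that just echoed 0 is the same helper pattern driving every scan in this trace. Below is a minimal sketch reconstructed from the setup/common.sh commands visible above; the loop framing is condensed, so treat it as illustrative rather than the literal source:

  # get_meminfo KEY [NODE]: pick the per-node sysfs file when a node id is
  # given, strip the "Node N " prefix, then scan "Key: value" lines until
  # the requested key matches. Every "continue" in the trace is one miss.
  get_meminfo() {
      local get=$1 node=$2 var val _ line mem mem_f=/proc/meminfo
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      shopt -s extglob
      mem=("${mem[@]#Node +([0-9]) }")     # node files prefix "Node N "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"                      # e.g. 1536, then 0, in this run
          return 0
      done
      return 1
  }

Called as get_meminfo HugePages_Surp 0 against the node0 dump above, this reproduces the echo 0 / return 0 pair in the trace.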
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:10:53.954 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220576 kB' 'MemFree: 28490864 kB' 'MemUsed: 15729712 kB' 'SwapCached: 0 kB' 'Active: 8100904 kB' 'Inactive: 4551840 kB' 'Active(anon): 7817692 kB' 'Inactive(anon): 0 kB' 'Active(file): 283212 kB' 'Inactive(file): 4551840 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12518024 kB' 'Mapped: 88828 kB' 'AnonPages: 134808 kB' 'Shmem: 7682972 kB' 'KernelStack: 7352 kB' 'PageTables: 2696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 315852 kB' 'Slab: 619112 kB' 'SReclaimable: 315852 kB' 'SUnreclaim: 303260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... identical xtrace triplets elided while get_meminfo scans the node1 fields (MemTotal through HugePages_Free) for HugePages_Surp ...]
00:10:53.956 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:10:53.956 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:10:53.956 16:34:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:10:53.956 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:10:53.956 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:10:53.956 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:10:53.956 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:10:53.956 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512'
00:10:53.956 node0=512 expecting 512
00:10:53.956 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:10:53.956 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:10:53.956 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:10:53.956 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024'
00:10:53.956 node1=1024 expecting 1024
00:10:53.956 16:34:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:10:53.956
00:10:53.956 real 0m6.388s
00:10:53.956 user 0m2.092s
00:10:53.956 sys 0m4.123s
00:10:53.956 16:34:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:53.956 16:34:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:10:53.956 ************************************
00:10:53.956 END TEST custom_alloc
00:10:53.956 ************************************
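The custom_alloc verdict above boils down to a small comparison. Here is a condensed sketch with the numbers this run produced plugged in; variable names follow the trace, and the literal 512,1024 join mirrors the hugepages.sh@129 check:

  # Condensed sketch of the verification: totals must add up, and the
  # observed per-node counts must equal the requested split.
  nr_hugepages=1536 surp=0 resv=0
  total=1536                            # get_meminfo HugePages_Total
  nodes_test=(512 1024)                 # observed, after adding per-node surplus
  nodes_sys=(512 1024)                  # requested split (node0, node1)

  (( total == nr_hugepages + surp + resv )) || exit 1
  for node in "${!nodes_test[@]}"; do
      echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
  done
  actual=$(IFS=,; printf '%s' "${nodes_test[*]}")   # -> "512,1024"
  [[ $actual == 512,1024 ]]                         # the @129 comparison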
00:10:53.956 16:34:58 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc
00:10:53.956 16:34:58 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:10:53.956 16:34:58 setup.sh.hugepages -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:53.956 16:34:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:10:53.956 ************************************
00:10:53.956 START TEST no_shrink_alloc
00:10:53.956 ************************************
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1127 -- # no_shrink_alloc
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 ))
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0')
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages ))
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0')
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=()
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 ))
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}"
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:10:53.956 16:34:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:10:58.157 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:10:58.157 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:10:58.157 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:10:58.157 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:10:58.157 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:10:58.157 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:10:58.157 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:10:58.157 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:10:58.157 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:10:58.157 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:10:58.157 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:10:58.157 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:10:58.157 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:10:58.157 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:10:58.157 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:10:58.157 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:10:58.157 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:11:00.099 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
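The no_shrink_alloc prologue above turns the requested 2097152 kB into a page count and pins it to node 0 before re-running setup. A minimal sketch of that arithmetic follows; the trace shows only the size guard and the resulting 1024, so the explicit division by the 2048 kB Hugepagesize is an inference, and the relative setup.sh path stands in for the full Jenkins workspace path invoked above:

  # 2097152 kB requested / 2048 kB per huge page = 1024 pages, all on node 0.
  size=2097152                     # kB, first argument to get_test_nr_hugepages
  default_hugepages=2048           # kB, Hugepagesize from /proc/meminfo
  node_ids=(0)                     # remaining arguments select NUMA nodes

  (( size >= default_hugepages )) || exit 1       # the @54 guard
  nr_hugepages=$(( size / default_hugepages ))    # -> 1024, as at @56
  for node in "${node_ids[@]}"; do
      nodes_test[node]=$nr_hugepages              # expected count per node
  done
  # Re-run hugepage setup with the computed count pinned to one node.
  NRHUGE=$nr_hugepages HUGENODE=${node_ids[0]} ./scripts/setup.sh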
15698844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215264 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
[... setup/common.sh@31-32 xtrace elided: IFS=': '; read -r var val _; [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]; continue, repeated for every /proc/meminfo key from MemTotal through HardwareCorrupted ...]
00:11:00.101 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:00.101 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:00.101 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:00.101 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0
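Editor's note on the trace above: just before this lookup the script evaluated [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]; that string is the content of /sys/kernel/mm/transparent_hugepage/enabled, and since the active mode ([madvise]) is not [never], verify_nr_hugepages goes on to sample AnonHugePages. The escaped \A\n\o\n\H\u\g\e\P\a\g\e\s strings are simply how xtrace renders a literal string match, one iteration per /proc/meminfo line. A minimal sketch of the pattern the trace shows (get_meminfo_sketch is an illustrative name; the real helper is setup/common.sh's get_meminfo and differs in detail):

shopt -s extglob   # needed for the +([0-9]) pattern below

# hedged re-creation of the parsing loop visible in the xtrace
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local var val _ line
    # with no node argument, the [[ -e .../node/node/meminfo ]] test in the
    # trace fails and the global /proc/meminfo is used, exactly as logged
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files
    for line in "${mem[@]}"; do
        # the long compare/continue runs in the log are exactly this step
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done
    return 1
}

On this box, get_meminfo_sketch AnonHugePages would print 0, matching the anon=0 assignment above.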
00:11:00.101 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:11:00.101 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
[... setup/common.sh@18-31 prologue identical to the AnonHugePages lookup above: local node=, var/val, mem_f=/proc/meminfo, node-meminfo existence check, mapfile -t mem, "Node N " prefix strip, IFS=': ' read ...]
00:11:00.101 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66474420 kB' 'MemAvailable: 72492276 kB' 'Buffers: 30740 kB' 'Cached: 20055256 kB' 'SwapCached: 0 kB' 'Active: 14907900 kB' 'Inactive: 5750580 kB' 'Active(anon): 14392428 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575748 kB' 'Mapped: 179012 kB' 'Shmem: 13819944 kB' 'KReclaimable: 587724 kB' 'Slab: 1231232 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 643508 kB' 'KernelStack: 17664 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15700084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215232 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
[... setup/common.sh@31-32 compare/continue xtrace elided for every /proc/meminfo key ahead of HugePages_Surp ...]
00:11:00.103 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.103 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:00.103 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:00.103 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
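HugePages_Surp counts surplus pages allocated beyond the configured pool, so 0 here means the pool has not overcommitted. The counters this loop extracts from /proc/meminfo are also exposed per page size under sysfs; a hypothetical cross-check, not part of the test script, assuming the 2048 kB Hugepagesize reported in the dumps above:

# per-size hugepage counters under sysfs mirror the meminfo fields parsed above
hp=/sys/kernel/mm/hugepages/hugepages-2048kB
printf 'total=%s free=%s surplus=%s reserved=%s\n' \
    "$(< "$hp/nr_hugepages")" "$(< "$hp/free_hugepages")" \
    "$(< "$hp/surplus_hugepages")" "$(< "$hp/resv_hugepages")"

Against the state logged here this would print total=1024 free=1024 surplus=0 reserved=0.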
00:11:00.103 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:11:00.103 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[... setup/common.sh@18-31 prologue identical to the lookups above ...]
00:11:00.103 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66467896 kB' 'MemAvailable: 72485752 kB' 'Buffers: 30740 kB' 'Cached: 20055276 kB' 'SwapCached: 0 kB' 'Active: 14911664 kB' 'Inactive: 5750580 kB' 'Active(anon): 14396192 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 579904 kB' 'Mapped: 179012 kB' 'Shmem: 13819964 kB' 'KReclaimable: 587724 kB' 'Slab: 1231200 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 643476 kB' 'KernelStack: 17904 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15703540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215312 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
[... setup/common.sh@31-32 compare/continue xtrace elided for every /proc/meminfo key ahead of HugePages_Rsvd ...]
00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:11:00.105 nr_hugepages=1024 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:11:00.105 resv_hugepages=0 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:11:00.105 surplus_hugepages=0 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:11:00.105 anon_hugepages=0 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
get=HugePages_Total 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66468152 kB' 'MemAvailable: 72486008 kB' 'Buffers: 30740 kB' 'Cached: 20055276 kB' 'SwapCached: 0 kB' 'Active: 14913600 kB' 'Inactive: 5750580 kB' 'Active(anon): 14398128 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581840 kB' 'Mapped: 179012 kB' 'Shmem: 13819964 kB' 'KReclaimable: 587724 kB' 'Slab: 1231200 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 643476 kB' 'KernelStack: 18128 kB' 'PageTables: 9804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15705024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215248 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.105 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.106 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
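The xtrace above and below is SPDK's get_meminfo helper from test/setup/common.sh walking /proc/meminfo one "var: val" pair at a time and echoing the value once the requested key matches; every non-matching key just takes the continue branch, which is why the same IFS=': ' / read -r / continue triple repeats once per meminfo field. A minimal sketch of that lookup, assuming simplified behavior (the real helper mapfile-reads the file into a mem array and strips the "Node <N> " prefix in-place, as the trace shows; the sed pipeline and while-read loop here are stand-ins for that):

    # Sketch of a /proc/meminfo lookup in the spirit of the traced helper.
    # get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo, or from
    # /sys/devices/system/node/node<NODE>/meminfo when NODE is given.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node files prefix each line with "Node <N> "; drop that first,
        # then split the remainder on ': ' exactly as the traced read loop does.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"  # e.g. 1024 for HugePages_Total, 0 for HugePages_Rsvd
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

Called as get_meminfo HugePages_Total or get_meminfo HugePages_Surp 0, matching the calls visible in this trace; the caller then checks identities such as (( 1024 == nr_hugepages + surp + resv )), as seen at setup/hugepages.sh@106 above.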
00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 35906208 kB' 'MemUsed: 12158656 kB' 'SwapCached: 0 kB' 'Active: 6806380 kB' 'Inactive: 1198740 kB' 'Active(anon): 6574120 kB' 'Inactive(anon): 0 kB' 'Active(file): 232260 kB' 'Inactive(file): 1198740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7567912 kB' 'Mapped: 89680 kB' 'AnonPages: 440376 kB' 'Shmem: 6136912 kB' 'KernelStack: 10680 kB' 'PageTables: 6924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271872 kB' 'Slab: 611636 kB' 'SReclaimable: 271872 kB' 'SUnreclaim: 339764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.107 16:35:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.107 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 
16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:00.108 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:00.109 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:00.109 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:00.109 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:00.109 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:11:00.109 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:11:00.109 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:11:00.109 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:11:00.109 16:35:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
00:11:00.109 node0=1024 expecting 1024
00:11:00.109 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:11:00.109 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no
00:11:00.109 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512
00:11:00.109 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0
00:11:00.109 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output
00:11:00.109 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:11:00.109 16:35:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:11:03.401 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:11:03.401 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver
00:11:03.401 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:11:03.401 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:11:03.401 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:11:03.401 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:11:03.401 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:11:03.401 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:11:03.401 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:11:03.401 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:11:03.401 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:11:03.401 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:11:03.401 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:11:03.401 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:11:03.401 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:11:03.401 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:11:03.401 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:11:05.962 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages
00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node
00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t
00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s
00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp
00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv
00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon
00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
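At this point the test has its node-0 answer (node0=1024 expecting 1024), re-runs scripts/setup.sh with NRHUGE=512 HUGENODE=0, and the setup script reports that 1024 pages are already allocated so nothing shrinks; verify_nr_hugepages then repeats the accounting, starting with the AnonHugePages lookup traced below. The per-node bookkeeping the trace walks through amounts to roughly the following sketch (nodes_sys, nodes_test, resv, and the ${node##*node} expansion are names taken from the trace; the initial per-node expectation and the exact fold of reserved/surplus pages are assumptions, and get_meminfo is the lookup sketched earlier):

    # Assumed shape of the per-node hugepage accounting seen in the trace.
    declare -A nodes_sys nodes_test=([0]=1024 [1]=0)  # expected split: assumption
    resv=$(get_meminfo HugePages_Rsvd)                # system-wide reserved pages
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}                              # node directory -> node index
        nodes_sys[$n]=$(get_meminfo HugePages_Total "$n")
        # Fold reserved and per-node surplus pages into the expectation.
        (( nodes_test[$n] += resv + $(get_meminfo HugePages_Surp "$n") ))
        echo "node$n=${nodes_sys[$n]} expecting ${nodes_test[$n]}"
    done

With reserved and surplus both 0 on this machine, both sides stay at 1024, which is exactly the node0=1024 expecting 1024 line above.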
00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66472184 kB' 'MemAvailable: 72490040 kB' 'Buffers: 30740 kB' 'Cached: 20055432 kB' 'SwapCached: 0 kB' 'Active: 14909312 kB' 'Inactive: 5750580 kB' 'Active(anon): 14393840 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577380 kB' 'Mapped: 178620 kB' 'Shmem: 13820120 kB' 'KReclaimable: 587724 kB' 'Slab: 1230276 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 642552 kB' 'KernelStack: 17616 kB' 'PageTables: 8428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15696692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215136 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:05.962 16:35:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue
[... 00:11:05.962-00:11:05.963: get_meminfo AnonHugePages walks the remaining /proc/meminfo fields (Cached, SwapCached, Active, Inactive, ..., Percpu, HardwareCorrupted); each non-matching key takes the setup/common.sh@32 "continue" branch; the repeated IFS=': ' / read -r var val _ / [[ key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] iterations are elided ...]
00:11:05.963 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:11:05.963 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:11:05.963 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:11:05.963 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0
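For orientation: the xtrace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one "Key: value" line at a time and echoing the value of the requested key. Below is a minimal self-contained sketch of that pattern, reconstructed from the trace; the names follow the trace, the per-node branch is inferred from the /sys/devices/system/node path probed at common.sh@23, and the real helper may differ in detail.

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern seen in this trace (an assumption:
    # reconstructed from xtrace, not copied from setup/common.sh).
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1    # /proc/meminfo key to look up, e.g. HugePages_Total
        local node=$2   # optional NUMA node number
        local var val
        local mem_f mem
        mem_f=/proc/meminfo

        # Per-node statistics live under /sys; those lines carry a "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip "Node N " prefixes, if any

        # Field-by-field scan: non-matching keys hit the "continue" seen in
        # the trace; on a match, print the numeric value and stop.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total   # on the machine in this log: prints 1024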
00:11:05.963 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:11:05.963 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:11:05.963 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:11:05.963 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:11:05.963 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:11:05.963 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:11:05.963 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:11:05.963 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:11:05.963 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:11:05.963 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:11:05.964 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66475544 kB' 'MemAvailable: 72493400 kB' 'Buffers: 30740 kB' 'Cached: 20055436 kB' 'SwapCached: 0 kB' 'Active: 14909432 kB' 'Inactive: 5750580 kB' 'Active(anon): 14393960 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577512 kB' 'Mapped: 178576 kB' 'Shmem: 13820124 kB' 'KReclaimable: 587724 kB' 'Slab: 1230256 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 642532 kB' 'KernelStack: 17600 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15696920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215088 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
[... 00:11:05.964-00:11:05.965: field-by-field scan; every key from MemTotal through HugePages_Free fails the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and takes the "continue" branch; iterations elided ...]
00:11:05.965 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:11:05.965 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:11:05.965 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:11:05.965 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
00:11:05.965 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
[... 00:11:05.965: setup/common.sh@17-@31 set up the HugePages_Rsvd lookup (get=HugePages_Rsvd, node=, mem_f=/proc/meminfo) and slurp the file exactly as above ...]
00:11:05.966 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285440 kB' 'MemFree: 66475544 kB' 'MemAvailable: 72493400 kB' 'Buffers: 30740 kB' 'Cached: 20055452 kB' 'SwapCached: 0 kB' 'Active: 14909176 kB' 'Inactive: 5750580 kB' 'Active(anon): 14393704 kB' 'Inactive(anon): 0 kB' 'Active(file): 515472 kB' 'Inactive(file): 5750580 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577204 kB' 'Mapped: 178576 kB' 'Shmem: 13820140 kB' 'KReclaimable: 587724 kB' 'Slab: 1230256 kB' 'SReclaimable: 587724 kB' 'SUnreclaim: 642532 kB' 'KernelStack: 17600 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 15696940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 215104 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB'
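A quick consistency check on the snapshot just printed: its hugetlb fields agree with one another, since (with only 2048 kB pages in play here) Hugetlb should equal HugePages_Total times Hugepagesize:

    # 1024 pages x 2048 kB/page = 2097152 kB, matching 'Hugetlb: 2097152 kB'
    echo $(( 1024 * 2048 ))   # 2097152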
[... 00:11:05.966-00:11:05.968: the HugePages_Rsvd scan takes the "continue" branch for every key from MemTotal through HugePages_Free; iterations elided ...]
00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:11:05.968 nr_hugepages=1024
00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:11:05.968 resv_hugepages=0
00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:11:05.968 surplus_hugepages=0
00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:11:05.968 anon_hugepages=0
00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
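The four echoed values and the two arithmetic checks above are the substance of the no_shrink_alloc verification: after the allocation exercise, the kernel must still report the full 1024-page pool, with no surplus, reserved, or anonymous-hugepage usage skewing the count. Restated as a compact sketch built on the get_meminfo sketch earlier (the literal 1024 is the pool size this job configured; values as seen in this log):

    # Bookkeeping from setup/hugepages.sh@96-@108, restated as a sketch.
    anon=$(get_meminfo AnonHugePages)       # THP-backed anonymous memory   -> 0
    surp=$(get_meminfo HugePages_Surp)      # surplus pages beyond the pool -> 0
    resv=$(get_meminfo HugePages_Rsvd)      # reserved but not yet faulted  -> 0
    nr_hugepages=$(get_meminfo HugePages_Total)   # -> 1024

    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

    # The trace's assertions: the preallocated 1024-page pool must be intact,
    # with nothing reserved or surplus eating into it.
    (( 1024 == nr_hugepages + surp + resv ))
    (( 1024 == nr_hugepages ))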
34359738367 kB' 'VmallocUsed: 215104 kB' 'VmallocChunk: 0 kB' 'Percpu: 95040 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 753080 kB' 'DirectMap2M: 25137152 kB' 'DirectMap1G: 76546048 kB' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.968 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 
16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:05.969 
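The long runs of [[ key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue above are xtrace output of a field-scanning loop: setup/common.sh captures a meminfo snapshot, strips any per-node prefix, then reads it back one 'key: value' pair at a time until the requested counter is found. A condensed bash sketch of that loop, reconstructed from the trace (the capture plumbing and error handling are trimmed; anything beyond what the trace shows is an assumption):

    shopt -s extglob   # needed for the "Node N " prefix strip below

    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs; each line there is prefixed "Node N ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        local entry var val _
        for entry in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$entry"
            [[ $var == "$get" ]] || continue   # every miss is one "continue" in the trace
            echo "$val"
            return 0
        done
        return 1
    }

Against the snapshot printed above, get_meminfo HugePages_Total yields 1024 and get_meminfo HugePages_Rsvd yields 0, matching the "echo 1024" and "echo 0" exit paths in the trace.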
16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:11:05.969 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 35890712 kB' 'MemUsed: 12174152 kB' 'SwapCached: 0 kB' 'Active: 6807392 kB' 'Inactive: 1198740 kB' 'Active(anon): 6575132 kB' 'Inactive(anon): 0 kB' 'Active(file): 232260 kB' 'Inactive(file): 1198740 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7567960 kB' 'Mapped: 89748 kB' 'AnonPages: 441544 kB' 'Shmem: 6136960 kB' 'KernelStack: 10200 kB' 'PageTables: 5556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271872 kB' 'Slab: 611432 kB' 'SReclaimable: 271872 kB' 'SUnreclaim: 339560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.970 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.971 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.971 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.971 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.971 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.971 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.971 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.971 16:35:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.971 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.971 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.971 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.971 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:05.971 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:05.971 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:05.971 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:05.971 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:11:06.230 node0=1024 expecting 1024 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:11:06.230 00:11:06.230 real 0m12.087s 00:11:06.230 user 0m3.676s 00:11:06.230 sys 0m7.846s 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:06.230 16:35:10 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:06.230 ************************************ 00:11:06.230 END TEST no_shrink_alloc 00:11:06.231 ************************************ 00:11:06.231 16:35:10 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:11:06.231 16:35:10 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:11:06.231 16:35:10 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:11:06.231 16:35:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:06.231 16:35:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:11:06.231 16:35:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:06.231 16:35:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:11:06.231 16:35:10 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:11:06.231 16:35:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:06.231 16:35:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:11:06.231 16:35:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:06.231 16:35:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:11:06.231 16:35:10 setup.sh.hugepages -- 
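With node0=1024 expecting 1024 confirmed, the suite tears down: the clear_hp loop traced above walks every hugepage pool under each NUMA node and echoes 0 into it; the export of CLEAR_HUGE=yes follows just below. A hypothetical condensed sketch (the redirection target of each "echo 0" is not visible in the xtrace, so writing into nr_hugepages is an assumption):

    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node*; do
            for hp in "$node"/hugepages/hugepages-*; do
                # Assumption: the trace shows only "echo 0"; the redirect into
                # the pool's nr_hugepages file is inferred, not shown.
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes
    }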
setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:11:06.231 16:35:10 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:11:06.231 00:11:06.231 real 0m40.315s 00:11:06.231 user 0m11.971s 00:11:06.231 sys 0m24.324s 00:11:06.231 16:35:10 setup.sh.hugepages -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:06.231 16:35:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:06.231 ************************************ 00:11:06.231 END TEST hugepages 00:11:06.231 ************************************ 00:11:06.231 16:35:10 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:11:06.231 16:35:10 setup.sh -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:06.231 16:35:10 setup.sh -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:06.231 16:35:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:06.231 ************************************ 00:11:06.231 START TEST driver 00:11:06.231 ************************************ 00:11:06.231 16:35:10 setup.sh.driver -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:11:06.231 * Looking for test storage... 00:11:06.231 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:11:06.231 16:35:10 setup.sh.driver -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:06.231 16:35:10 setup.sh.driver -- common/autotest_common.sh@1691 -- # lcov --version 00:11:06.231 16:35:10 setup.sh.driver -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:06.489 16:35:10 setup.sh.driver -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:06.489 16:35:10 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.489 16:35:10 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.489 16:35:10 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.489 16:35:10 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.489 16:35:10 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.489 16:35:10 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.490 16:35:10 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:11:06.490 16:35:10 setup.sh.driver -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.490 16:35:10 setup.sh.driver -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:06.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.490 --rc genhtml_branch_coverage=1 00:11:06.490 --rc genhtml_function_coverage=1 00:11:06.490 --rc genhtml_legend=1 00:11:06.490 --rc geninfo_all_blocks=1 00:11:06.490 --rc geninfo_unexecuted_blocks=1 00:11:06.490 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:06.490 ' 00:11:06.490 16:35:10 setup.sh.driver -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:06.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.490 --rc genhtml_branch_coverage=1 00:11:06.490 --rc genhtml_function_coverage=1 00:11:06.490 --rc genhtml_legend=1 00:11:06.490 --rc geninfo_all_blocks=1 00:11:06.490 --rc geninfo_unexecuted_blocks=1 00:11:06.490 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:06.490 ' 00:11:06.490 16:35:10 setup.sh.driver -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:06.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.490 --rc genhtml_branch_coverage=1 00:11:06.490 --rc genhtml_function_coverage=1 00:11:06.490 --rc genhtml_legend=1 00:11:06.490 --rc geninfo_all_blocks=1 00:11:06.490 --rc geninfo_unexecuted_blocks=1 00:11:06.490 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:06.490 ' 00:11:06.490 16:35:10 setup.sh.driver -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:06.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.490 --rc genhtml_branch_coverage=1 00:11:06.490 --rc genhtml_function_coverage=1 00:11:06.490 --rc genhtml_legend=1 00:11:06.490 --rc geninfo_all_blocks=1 00:11:06.490 --rc geninfo_unexecuted_blocks=1 00:11:06.490 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:06.490 ' 00:11:06.490 16:35:10 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:11:06.490 16:35:10 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:06.490 16:35:10 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:11:14.611 16:35:18 setup.sh.driver -- 
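The scripts/common.sh trace above (lt 1.15 2 via cmp_versions and decimal) checks whether the installed lcov predates 2.x before settling on the LCOV_OPTS shown. A condensed sketch of the comparison, with the decimal digit-validation and the >=, <= and == handling trimmed (details beyond the trace are assumptions):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            # Unset fields evaluate to 0, so "1.15" vs "2" decides on 1 vs 2.
            ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]   # all fields equal; only equality-style operators pass
    }

Here lt 1.15 2 returns 0 (1 < 2 on the first field), which is why the --rc lcov_branch_coverage=1 option set shown above is chosen for the older lcov.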
setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:11:14.611 16:35:18 setup.sh.driver -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:14.611 16:35:18 setup.sh.driver -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:14.611 16:35:18 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:11:14.611 ************************************ 00:11:14.611 START TEST guess_driver 00:11:14.611 ************************************ 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@1127 -- # guess_driver 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 238 > 0 )) 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:11:14.611 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:11:14.611 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:11:14.611 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:11:14.611 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:11:14.611 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:11:14.611 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:11:14.611 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:11:14.611 Looking for driver=vfio-pci 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- 
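The pick_driver/vfio trace above selects vfio-pci because 238 IOMMU groups are present and modprobe --show-depends vfio_pci resolves to real .ko modules. A hypothetical condensed sketch of that decision (the unsafe no-IOMMU handling is simplified and the non-vfio fallback is omitted, since neither path is exercised in this trace):

    vfio() {
        local iommu_groups unsafe_vfio
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        iommu_groups=(/sys/kernel/iommu_groups/*)
        # vfio-pci is viable when an IOMMU is active (groups exist, or unsafe
        # no-IOMMU mode is enabled) and the module dependency graph resolves.
        if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
            if modprobe --show-depends vfio_pci | grep -q '\.ko'; then
                echo vfio-pci
                return 0
            fi
        fi
        return 1
    }

    pick_driver() {
        vfio || echo 'No valid driver found'
    }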
# setup output config 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:11:14.611 16:35:18 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:17.148 16:35:21 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:17.148 16:35:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:20.443 16:35:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:20.443 16:35:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:11:20.443 16:35:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:22.980 16:35:27 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:11:22.980 16:35:27 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:11:22.980 16:35:27 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:22.980 16:35:27 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:11:31.102 00:11:31.102 real 0m15.923s 00:11:31.102 user 0m3.632s 00:11:31.102 sys 0m8.189s 00:11:31.102 16:35:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:31.102 16:35:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:11:31.102 ************************************ 00:11:31.102 END TEST guess_driver 00:11:31.102 ************************************ 00:11:31.102 00:11:31.102 real 0m23.605s 00:11:31.102 user 0m5.801s 00:11:31.102 sys 0m12.794s 00:11:31.102 16:35:34 setup.sh.driver -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:31.102 16:35:34 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:11:31.102 ************************************ 00:11:31.102 END TEST driver 00:11:31.102 ************************************ 00:11:31.102 16:35:34 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:11:31.102 16:35:34 setup.sh -- 
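The long [[ -> == \-\> ]] / [[ vfio-pci == vfio-pci ]] run above is the verification pass that closes guess_driver: it re-reads the `setup output config` listing and requires every line carrying a -> marker to name the driver just picked. A hypothetical condensed sketch (the column layout is inferred from the "read -r _ _ _ _ marker setup_driver" trace, and the no-valid-driver bailout is omitted):

    guess_driver() {
        local driver setup_driver marker fail=0
        driver=$(pick_driver)
        echo "Looking for driver=$driver"
        while read -r _ _ _ _ marker setup_driver; do
            [[ $marker == '->' ]] || continue           # only bound-device lines count
            [[ $setup_driver == "$driver" ]] || fail=1  # any mismatch fails the test
        done < <(setup output config)
        ((fail == 0))
    }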
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:31.102 16:35:34 setup.sh -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:31.102 16:35:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:31.102 ************************************ 00:11:31.102 START TEST devices 00:11:31.102 ************************************ 00:11:31.102 16:35:34 setup.sh.devices -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:11:31.102 * Looking for test storage... 00:11:31.102 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:11:31.102 16:35:34 setup.sh.devices -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:31.102 16:35:34 setup.sh.devices -- common/autotest_common.sh@1691 -- # lcov --version 00:11:31.102 16:35:34 setup.sh.devices -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:31.102 16:35:34 setup.sh.devices -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:31.102 16:35:34 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:11:31.102 16:35:34 setup.sh.devices -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.102 16:35:34 setup.sh.devices -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:31.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.102 --rc genhtml_branch_coverage=1 00:11:31.102 --rc genhtml_function_coverage=1 00:11:31.102 --rc genhtml_legend=1 00:11:31.102 --rc geninfo_all_blocks=1 00:11:31.102 --rc geninfo_unexecuted_blocks=1 00:11:31.102 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:31.102 ' 00:11:31.102 16:35:34 setup.sh.devices -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:31.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.102 --rc genhtml_branch_coverage=1 00:11:31.102 --rc genhtml_function_coverage=1 00:11:31.102 --rc genhtml_legend=1 00:11:31.102 --rc geninfo_all_blocks=1 00:11:31.102 --rc geninfo_unexecuted_blocks=1 00:11:31.102 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:31.102 ' 00:11:31.102 16:35:34 setup.sh.devices -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:31.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.102 --rc genhtml_branch_coverage=1 00:11:31.102 --rc genhtml_function_coverage=1 00:11:31.102 --rc genhtml_legend=1 00:11:31.102 --rc geninfo_all_blocks=1 00:11:31.102 --rc geninfo_unexecuted_blocks=1 00:11:31.102 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:31.102 ' 00:11:31.102 16:35:34 setup.sh.devices -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:31.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.102 --rc genhtml_branch_coverage=1 00:11:31.102 --rc genhtml_function_coverage=1 00:11:31.102 --rc genhtml_legend=1 00:11:31.102 --rc geninfo_all_blocks=1 00:11:31.102 --rc geninfo_unexecuted_blocks=1 00:11:31.102 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:31.102 ' 00:11:31.102 16:35:34 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:11:31.102 16:35:34 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:11:31.102 16:35:34 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:31.102 16:35:34 setup.sh.devices -- setup/common.sh@12 -- # 
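The lcov probe above is scripts/common.sh's componentwise version compare: split both versions on ".-:", walk the components left to right, and resolve the first inequality against the requested operator. A condensed sketch (the `decimal` normalization of non-numeric components seen in the trace is elided; ${verN[v]:-0} stands in for it):

  cmp_versions() { # e.g. cmp_versions 1.15 '<' 2
    local IFS=.-: v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == *'>'* ]]; return; }
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == *'<'* ]]; return; }
    done
    [[ $2 == *'='* ]] # every compared component equal
  }
  lt() { cmp_versions "$1" '<' "$2"; }
  lt 1.15 2 && echo "lcov predates 2.0: keep the legacy --rc lcov_*_coverage names"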
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:11:36.460 16:35:41 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:11:36.460 16:35:41 setup.sh.devices -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:11:36.460 16:35:41 setup.sh.devices -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:11:36.460 16:35:41 setup.sh.devices -- common/autotest_common.sh@1656 -- # local nvme bdf 00:11:36.460 16:35:41 setup.sh.devices -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:11:36.460 16:35:41 setup.sh.devices -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:11:36.460 16:35:41 setup.sh.devices -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:11:36.460 16:35:41 setup.sh.devices -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:36.460 16:35:41 setup.sh.devices -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:11:36.460 16:35:41 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:11:36.460 16:35:41 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:11:36.460 16:35:41 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:11:36.460 16:35:41 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:11:36.460 16:35:41 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:11:36.460 16:35:41 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:11:36.460 16:35:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:11:36.460 16:35:41 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:11:36.460 16:35:41 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:1a:00.0 00:11:36.460 16:35:41 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:11:36.460 16:35:41 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:11:36.460 16:35:41 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:11:36.460 16:35:41 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:11:36.720 No valid GPT data, bailing 00:11:36.720 16:35:41 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:36.720 16:35:41 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:11:36.720 16:35:41 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:11:36.720 16:35:41 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:11:36.720 16:35:41 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:36.720 16:35:41 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:36.720 16:35:41 setup.sh.devices -- setup/common.sh@80 -- # echo 4000787030016 00:11:36.720 16:35:41 setup.sh.devices -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:11:36.720 16:35:41 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:36.720 16:35:41 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:1a:00.0 00:11:36.720 16:35:41 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:11:36.720 16:35:41 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:11:36.720 16:35:41 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:11:36.720 16:35:41 setup.sh.devices -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:36.720 16:35:41 
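The device-selection pass above gates each NVMe namespace three ways: it must not be zoned, must carry no valid GPT ("No valid GPT data, bailing" is the pass case, meaning the disk is free), and must be at least min_disk_size bytes. A rough equivalent; the glob, the sysfs PCI resolution, and the spdk-gpt.py exit-code handling are my assumptions, not quotes of setup/devices.sh:

  min_disk_size=3221225472 # 3 GiB, as in the trace
  declare -a blocks=(); declare -A blocks_to_pci=()
  for block in /sys/block/nvme*n1; do
    dev=${block##*/}
    [[ $(< "$block/queue/zoned") == none ]] || continue  # zoned namespace: skip
    scripts/spdk-gpt.py "$dev" >/dev/null && continue    # valid GPT found: disk in use
    (( $(< "$block/size") * 512 >= min_disk_size )) || continue
    blocks+=("$dev")
    blocks_to_pci[$dev]=$(basename "$(readlink -f "$block/device/device")") # e.g. 0000:1a:00.0
  done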
setup.sh.devices -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:36.720 16:35:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:11:36.720 ************************************ 00:11:36.720 START TEST nvme_mount 00:11:36.720 ************************************ 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1127 -- # nvme_mount 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:11:36.720 16:35:41 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:11:37.658 Creating new GPT entries in memory. 00:11:37.658 GPT data structures destroyed! You may now partition the disk using fdisk or 00:11:37.658 other utilities. 00:11:37.658 16:35:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:11:37.658 16:35:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:37.658 16:35:42 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:37.658 16:35:42 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:37.658 16:35:42 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:11:38.596 Creating new GPT entries in memory. 00:11:38.596 The operation has completed successfully. 
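The partitioning arithmetic just logged is worth spelling out: sizes are converted to 512-byte sectors, the first partition starts at LBA 2048, and each subsequent one begins right after its predecessor. A sketch in the shape of setup/common.sh's partition_drive (the sync_dev_uevents wrapper that serializes the udev events is omitted here):

  partition_drive() { # usage: partition_drive nvme0n1 [part_no], 1 GiB per partition
    local disk=$1 part_no=${2:-1} size=$((1024 ** 3))
    local part part_start=0 part_end=0
    ((size /= 512))                  # 1 GiB -> 2097152 sectors
    sgdisk "/dev/$disk" --zap-all
    for ((part = 1; part <= part_no; part++)); do
      ((part_start = part_start == 0 ? 2048 : part_end + 1))
      ((part_end = part_start + size - 1))  # part 1: 2048..2099199, as in the log
      flock "/dev/$disk" sgdisk "/dev/$disk" --new=$part:$part_start:$part_end
    done
  }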
00:11:38.596 16:35:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:38.596 16:35:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:38.596 16:35:43 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3489662 00:11:38.855 16:35:43 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:11:38.855 16:35:43 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:11:38.855 16:35:43 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:11:38.855 16:35:43 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:11:38.855 16:35:43 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:11:38.855 16:35:43 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:11:38.855 16:35:43 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:38.855 16:35:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:11:38.855 16:35:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:11:38.856 16:35:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:11:38.856 16:35:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:38.856 16:35:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:11:38.856 16:35:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:38.856 16:35:43 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:11:38.856 16:35:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:11:38.856 16:35:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.856 16:35:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:11:38.856 16:35:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:11:38.856 16:35:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:38.856 16:35:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:42.144 
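The PCI scan that follows is the verify helper at work: it reruns setup.sh with PCI_ALLOWED pinned to the test disk and walks the per-device report, requiring that the allowed controller be held by the expected mount while every other address simply fails the comparison. Sketched from the xtrace (the helper name and the status wording appear in the log; the exact invocation is assumed):

  verify() { # verify <bdf> <mounts>, e.g. verify 0000:1a:00.0 nvme0n1:nvme0n1p1
    local dev=$1 mounts=$2 found=0 pci status
    while read -r pci _ _ status; do
      [[ $pci == "$dev" ]] || continue
      # e.g. "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
      [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
    done < <(PCI_ALLOWED="$dev" scripts/setup.sh config)
    (( found == 1 ))
  }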
16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:42.144 16:35:46 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:11:44.688 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:44.688 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:11:44.688 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:11:44.688 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:44.688 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:44.688 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:11:44.688 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:11:44.688 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:11:44.688 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:44.688 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:11:44.688 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:44.688 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:44.688 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:44.947 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:11:44.947 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:11:44.947 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:44.947 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:44.947 16:35:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:49.142 16:35:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:51.047 16:35:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:51.047 16:35:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:11:51.047 16:35:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:11:51.047 16:35:55 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:51.047 16:35:55 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:11:51.047 16:35:55 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:11:51.047 16:35:55 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:1a:00.0 data@nvme0n1 '' '' 00:11:51.047 16:35:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:11:51.047 16:35:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:11:51.048 16:35:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:11:51.048 16:35:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:11:51.048 16:35:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:11:51.048 16:35:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:11:51.048 16:35:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local 
pci status 00:11:51.048 16:35:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:51.048 16:35:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:11:51.048 16:35:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:11:51.048 16:35:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:51.048 16:35:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ 
_ status 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:11:55.241 16:35:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:57.145 16:36:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:57.145 16:36:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:11:57.145 16:36:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:11:57.145 16:36:01 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:11:57.145 16:36:01 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:11:57.145 16:36:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:57.145 16:36:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:57.145 16:36:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:57.145 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:57.145 00:11:57.145 real 0m20.239s 00:11:57.145 user 0m5.632s 00:11:57.145 sys 0m12.108s 00:11:57.145 16:36:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:57.145 16:36:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:11:57.145 ************************************ 00:11:57.145 END TEST nvme_mount 00:11:57.145 ************************************ 00:11:57.145 16:36:01 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:11:57.145 16:36:01 setup.sh.devices -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:57.145 16:36:01 setup.sh.devices -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:57.145 16:36:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:11:57.145 ************************************ 00:11:57.145 START TEST dm_mount 00:11:57.145 ************************************ 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1127 -- # dm_mount 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:11:57.145 
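Teardown between the two phases above is mechanical; the cleanup_nvme trace (devices.sh@20-28) amounts to the following outline, with the wipefs output above confirming that both the partition and the whole disk are scrubbed before the next phase re-partitions:

  cleanup_nvme() {
    # unmount if the test dir is still a mountpoint, then erase signatures
    mountpoint -q "$nvme_mount" && umount "$nvme_mount"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
  }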
16:36:01 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:11:57.145 16:36:01 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:11:58.082 Creating new GPT entries in memory. 00:11:58.082 GPT data structures destroyed! You may now partition the disk using fdisk or 00:11:58.082 other utilities. 00:11:58.082 16:36:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:11:58.082 16:36:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:58.082 16:36:02 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:58.082 16:36:02 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:58.082 16:36:02 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:11:59.019 Creating new GPT entries in memory. 00:11:59.019 The operation has completed successfully. 00:11:59.019 16:36:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:59.019 16:36:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:59.019 16:36:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:59.019 16:36:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:59.019 16:36:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:11:59.957 The operation has completed successfully. 
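Below, the two fresh 1 GiB partitions are stitched into a single device-mapper node. The log records only the dmsetup create call and the name resolution, so the linear concat table here is a reconstruction, not the literal table devices.sh feeds in:

  p1=$(blockdev --getsz /dev/nvme0n1p1)  # sizes in 512-byte sectors
  p2=$(blockdev --getsz /dev/nvme0n1p2)
  # dm table format: <start> <length> linear <device> <offset>
  {
    echo "0 $p1 linear /dev/nvme0n1p1 0"
    echo "$p1 $p2 linear /dev/nvme0n1p2 0"
  } | dmsetup create nvme_dm_test
  dm=$(readlink -f /dev/mapper/nvme_dm_test) # -> /dev/dm-0, as devices.sh@165 resolves it
  dm=${dm##*/}                               # -> dm-0
  [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]] # both partitions now list dm-0
  [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]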
00:11:59.957 16:36:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:59.957 16:36:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:59.957 16:36:04 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3495176 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:1a:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- 
setup/devices.sh@53 -- # local found=0 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:00.216 16:36:04 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:04.408 16:36:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:06.311 16:36:10 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:06.311 16:36:10 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:12:06.311 16:36:10 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:12:06.311 16:36:10 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:12:06.311 16:36:10 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:12:06.311 16:36:10 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:12:06.311 16:36:10 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:1a:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:12:06.311 16:36:10 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:12:06.311 16:36:10 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:12:06.311 16:36:10 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:12:06.311 16:36:10 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:12:06.311 16:36:10 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:12:06.311 16:36:10 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:12:06.311 16:36:10 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:12:06.311 16:36:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:06.311 16:36:10 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:12:06.311 16:36:10 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:12:06.311 16:36:10 
setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:06.311 16:36:10 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:09.601 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:09.602 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:12:09.602 16:36:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:12.135 16:36:16 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:12.135 16:36:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:12:12.135 16:36:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:12:12.135 16:36:16 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:12:12.135 16:36:16 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:12:12.135 16:36:16 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:12:12.135 16:36:16 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:12:12.135 16:36:16 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:12.135 16:36:16 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:12:12.135 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:12.135 16:36:16 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:12:12.135 16:36:16 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:12:12.135 00:12:12.135 real 0m15.058s 00:12:12.135 user 0m3.888s 00:12:12.135 sys 0m7.973s 00:12:12.135 16:36:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:12.135 16:36:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:12:12.135 ************************************ 00:12:12.135 END TEST dm_mount 00:12:12.135 ************************************ 00:12:12.135 16:36:16 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:12:12.135 16:36:16 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:12:12.135 16:36:16 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:12:12.135 16:36:16 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:12.135 16:36:16 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:12:12.135 16:36:16 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:12.135 16:36:16 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:12.395 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:12:12.395 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:12:12.395 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:12.395 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:12.395 16:36:16 
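And the matching dm teardown traced above (devices.sh@33-43), in outline:

  cleanup_dm() {
    mountpoint -q "$dm_mount" && umount "$dm_mount"
    # remove the mapping first, then scrub both backing partitions
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2
  }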
setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:12:12.395 16:36:16 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:12:12.395 16:36:16 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:12:12.395 16:36:16 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:12.395 16:36:16 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:12:12.395 16:36:16 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:12:12.395 16:36:16 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:12:12.395 00:12:12.395 real 0m42.488s 00:12:12.395 user 0m11.857s 00:12:12.395 sys 0m24.781s 00:12:12.395 16:36:16 setup.sh.devices -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:12.395 16:36:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:12:12.395 ************************************ 00:12:12.395 END TEST devices 00:12:12.395 ************************************ 00:12:12.395 00:12:12.395 real 2m26.424s 00:12:12.395 user 0m41.227s 00:12:12.395 sys 1m26.343s 00:12:12.395 16:36:16 setup.sh -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:12.395 16:36:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:12.395 ************************************ 00:12:12.395 END TEST setup.sh 00:12:12.395 ************************************ 00:12:12.395 16:36:16 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:12:16.609 Hugepages 00:12:16.609 node hugesize free / total 00:12:16.609 node0 1048576kB 0 / 0 00:12:16.609 node0 2048kB 1024 / 1024 00:12:16.609 node1 1048576kB 0 / 0 00:12:16.609 node1 2048kB 1024 / 1024 00:12:16.609 00:12:16.609 Type BDF Vendor Device NUMA Driver Device Block devices 00:12:16.609 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:12:16.609 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:12:16.609 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:12:16.609 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:12:16.609 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:12:16.609 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:12:16.609 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:12:16.609 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:12:16.609 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:12:16.609 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:12:16.609 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:12:16.609 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:12:16.609 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:12:16.609 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:12:16.609 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:12:16.609 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:12:16.609 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:12:16.868 16:36:21 -- spdk/autotest.sh@117 -- # uname -s 00:12:16.868 16:36:21 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:12:16.868 16:36:21 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:12:16.868 16:36:21 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:12:21.063 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:12:21.063 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:12:21.063 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:12:21.063 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:12:21.063 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:12:21.063 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:12:21.063 0000:00:04.1 
(8086 2021): ioatdma -> vfio-pci 00:12:21.063 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:12:21.063 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:12:21.063 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:12:21.063 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:12:21.063 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:12:21.063 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:12:21.063 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:12:21.063 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:12:21.063 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:12:24.356 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:12:26.260 16:36:30 -- common/autotest_common.sh@1515 -- # sleep 1 00:12:27.639 16:36:31 -- common/autotest_common.sh@1516 -- # bdfs=() 00:12:27.639 16:36:31 -- common/autotest_common.sh@1516 -- # local bdfs 00:12:27.639 16:36:31 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:12:27.639 16:36:31 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:12:27.639 16:36:31 -- common/autotest_common.sh@1496 -- # bdfs=() 00:12:27.639 16:36:31 -- common/autotest_common.sh@1496 -- # local bdfs 00:12:27.639 16:36:31 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:27.639 16:36:31 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:12:27.639 16:36:31 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:12:27.639 16:36:31 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:12:27.639 16:36:31 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:1a:00.0 00:12:27.639 16:36:31 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:12:31.830 Waiting for block devices as requested 00:12:31.830 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:12:31.830 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:12:31.830 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:12:31.830 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:12:31.830 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:12:31.830 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:12:31.830 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:12:31.830 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:12:31.830 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:12:32.089 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:12:32.089 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:12:32.089 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:12:32.349 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:12:32.349 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:12:32.349 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:12:32.608 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:12:32.608 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:12:35.143 16:36:39 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:12:35.143 16:36:39 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:1a:00.0 00:12:35.143 16:36:39 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:12:35.143 16:36:39 -- common/autotest_common.sh@1485 -- # grep 0000:1a:00.0/nvme/nvme 00:12:35.143 16:36:39 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:12:35.143 16:36:39 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 ]] 00:12:35.143 16:36:39 -- 
common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:12:35.143 16:36:39 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:12:35.143 16:36:39 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:12:35.143 16:36:39 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:12:35.143 16:36:39 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:12:35.143 16:36:39 -- common/autotest_common.sh@1529 -- # grep oacs 00:12:35.143 16:36:39 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:12:35.143 16:36:39 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:12:35.143 16:36:39 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:12:35.143 16:36:39 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:12:35.143 16:36:39 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:12:35.143 16:36:39 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:12:35.143 16:36:39 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:12:35.143 16:36:39 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:12:35.143 16:36:39 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:12:35.143 16:36:39 -- common/autotest_common.sh@1541 -- # continue 00:12:35.143 16:36:39 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:12:35.143 16:36:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:35.143 16:36:39 -- common/autotest_common.sh@10 -- # set +x 00:12:35.143 16:36:39 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:12:35.143 16:36:39 -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:35.143 16:36:39 -- common/autotest_common.sh@10 -- # set +x 00:12:35.144 16:36:39 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:12:39.512 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:12:39.512 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:12:39.512 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:12:39.512 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:12:39.512 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:12:39.512 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:12:39.512 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:12:39.512 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:12:39.512 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:12:39.512 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:12:39.512 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:12:39.512 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:12:39.512 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:12:39.512 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:12:39.512 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:12:39.512 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:12:42.803 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:12:44.708 16:36:48 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:12:44.708 16:36:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:44.708 16:36:48 -- common/autotest_common.sh@10 -- # set +x 00:12:44.708 16:36:48 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:12:44.708 16:36:48 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:12:44.708 16:36:48 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:12:44.708 16:36:48 -- common/autotest_common.sh@1561 -- # bdfs=() 00:12:44.708 16:36:48 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:12:44.708 16:36:48 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:12:44.708 16:36:48 -- 
common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:12:44.708 16:36:48 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:12:44.708 16:36:48 -- common/autotest_common.sh@1496 -- # bdfs=() 00:12:44.708 16:36:48 -- common/autotest_common.sh@1496 -- # local bdfs 00:12:44.708 16:36:48 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:44.708 16:36:48 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:12:44.708 16:36:48 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:12:44.708 16:36:49 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:12:44.708 16:36:49 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:1a:00.0 00:12:44.708 16:36:49 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:12:44.708 16:36:49 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:1a:00.0/device 00:12:44.708 16:36:49 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:12:44.708 16:36:49 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:12:44.708 16:36:49 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:12:44.708 16:36:49 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:12:44.708 16:36:49 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:1a:00.0 00:12:44.708 16:36:49 -- common/autotest_common.sh@1577 -- # [[ -z 0000:1a:00.0 ]] 00:12:44.708 16:36:49 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3506861 00:12:44.708 16:36:49 -- common/autotest_common.sh@1583 -- # waitforlisten 3506861 00:12:44.708 16:36:49 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:12:44.708 16:36:49 -- common/autotest_common.sh@833 -- # '[' -z 3506861 ']' 00:12:44.708 16:36:49 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.708 16:36:49 -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:44.708 16:36:49 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.708 16:36:49 -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:44.708 16:36:49 -- common/autotest_common.sh@10 -- # set +x 00:12:44.708 [2024-11-05 16:36:49.086352] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
00:12:44.708 [2024-11-05 16:36:49.086428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3506861 ] 00:12:44.708 [2024-11-05 16:36:49.197257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.708 [2024-11-05 16:36:49.258028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.967 16:36:49 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:44.967 16:36:49 -- common/autotest_common.sh@866 -- # return 0 00:12:44.967 16:36:49 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:12:44.967 16:36:49 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:12:44.967 16:36:49 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:1a:00.0 00:12:48.255 nvme0n1 00:12:48.255 16:36:52 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:12:48.255 [2024-11-05 16:36:52.720605] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:12:48.255 request: 00:12:48.255 { 00:12:48.255 "nvme_ctrlr_name": "nvme0", 00:12:48.255 "password": "test", 00:12:48.255 "method": "bdev_nvme_opal_revert", 00:12:48.255 "req_id": 1 00:12:48.255 } 00:12:48.255 Got JSON-RPC error response 00:12:48.255 response: 00:12:48.255 { 00:12:48.255 "code": -32602, 00:12:48.255 "message": "Invalid parameters" 00:12:48.255 } 00:12:48.255 16:36:52 -- common/autotest_common.sh@1589 -- # true 00:12:48.255 16:36:52 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:12:48.255 16:36:52 -- common/autotest_common.sh@1593 -- # killprocess 3506861 00:12:48.255 16:36:52 -- common/autotest_common.sh@952 -- # '[' -z 3506861 ']' 00:12:48.255 16:36:52 -- common/autotest_common.sh@956 -- # kill -0 3506861 00:12:48.255 16:36:52 -- common/autotest_common.sh@957 -- # uname 00:12:48.255 16:36:52 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:48.255 16:36:52 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3506861 00:12:48.255 16:36:52 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:48.255 16:36:52 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:48.255 16:36:52 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3506861' 00:12:48.255 killing process with pid 3506861 00:12:48.255 16:36:52 -- common/autotest_common.sh@971 -- # kill 3506861 00:12:48.255 16:36:52 -- common/autotest_common.sh@976 -- # wait 3506861 00:12:52.441 16:36:56 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:12:52.441 16:36:56 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:12:52.441 16:36:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:12:52.441 16:36:56 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:12:52.441 16:36:56 -- spdk/autotest.sh@149 -- # timing_enter lib 00:12:52.441 16:36:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:52.441 16:36:56 -- common/autotest_common.sh@10 -- # set +x 00:12:52.441 16:36:56 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:12:52.441 16:36:56 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:12:52.441 16:36:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:52.441 16:36:56 -- common/autotest_common.sh@1109 
-- # xtrace_disable 00:12:52.441 16:36:56 -- common/autotest_common.sh@10 -- # set +x 00:12:52.441 ************************************ 00:12:52.441 START TEST env 00:12:52.441 ************************************ 00:12:52.441 16:36:56 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:12:52.441 * Looking for test storage... 00:12:52.441 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:12:52.441 16:36:56 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:52.441 16:36:56 env -- common/autotest_common.sh@1691 -- # lcov --version 00:12:52.441 16:36:56 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:52.441 16:36:57 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:52.441 16:36:57 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.441 16:36:57 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.441 16:36:57 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.441 16:36:57 env -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.441 16:36:57 env -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.441 16:36:57 env -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.441 16:36:57 env -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.441 16:36:57 env -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.441 16:36:57 env -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.441 16:36:57 env -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.441 16:36:57 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.441 16:36:57 env -- scripts/common.sh@344 -- # case "$op" in 00:12:52.441 16:36:57 env -- scripts/common.sh@345 -- # : 1 00:12:52.441 16:36:57 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.441 16:36:57 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:52.441 16:36:57 env -- scripts/common.sh@365 -- # decimal 1 00:12:52.441 16:36:57 env -- scripts/common.sh@353 -- # local d=1 00:12:52.441 16:36:57 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.441 16:36:57 env -- scripts/common.sh@355 -- # echo 1 00:12:52.700 16:36:57 env -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.700 16:36:57 env -- scripts/common.sh@366 -- # decimal 2 00:12:52.700 16:36:57 env -- scripts/common.sh@353 -- # local d=2 00:12:52.700 16:36:57 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.700 16:36:57 env -- scripts/common.sh@355 -- # echo 2 00:12:52.700 16:36:57 env -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.700 16:36:57 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.700 16:36:57 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.700 16:36:57 env -- scripts/common.sh@368 -- # return 0 00:12:52.700 16:36:57 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.700 16:36:57 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:52.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.700 --rc genhtml_branch_coverage=1 00:12:52.700 --rc genhtml_function_coverage=1 00:12:52.700 --rc genhtml_legend=1 00:12:52.700 --rc geninfo_all_blocks=1 00:12:52.700 --rc geninfo_unexecuted_blocks=1 00:12:52.700 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:52.700 ' 00:12:52.700 16:36:57 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:52.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.700 --rc genhtml_branch_coverage=1 00:12:52.700 --rc genhtml_function_coverage=1 00:12:52.700 --rc genhtml_legend=1 00:12:52.700 --rc geninfo_all_blocks=1 00:12:52.700 --rc geninfo_unexecuted_blocks=1 00:12:52.700 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:52.700 ' 00:12:52.700 16:36:57 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:52.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.700 --rc genhtml_branch_coverage=1 00:12:52.700 --rc genhtml_function_coverage=1 00:12:52.700 --rc genhtml_legend=1 00:12:52.700 --rc geninfo_all_blocks=1 00:12:52.700 --rc geninfo_unexecuted_blocks=1 00:12:52.700 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:52.700 ' 00:12:52.700 16:36:57 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:52.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.700 --rc genhtml_branch_coverage=1 00:12:52.700 --rc genhtml_function_coverage=1 00:12:52.700 --rc genhtml_legend=1 00:12:52.700 --rc geninfo_all_blocks=1 00:12:52.700 --rc geninfo_unexecuted_blocks=1 00:12:52.700 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:12:52.700 ' 00:12:52.700 16:36:57 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:12:52.700 16:36:57 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:52.700 16:36:57 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:52.700 16:36:57 env -- common/autotest_common.sh@10 -- # set +x 00:12:52.700 ************************************ 00:12:52.700 START TEST env_memory 00:12:52.700 ************************************ 00:12:52.700 16:36:57 env.env_memory -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:12:52.700 00:12:52.700 00:12:52.700 CUnit - A unit testing framework for C - Version 2.1-3 00:12:52.700 http://cunit.sourceforge.net/ 00:12:52.700 00:12:52.700 00:12:52.700 Suite: memory 00:12:52.700 Test: alloc and free memory map ...[2024-11-05 16:36:57.108785] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:12:52.700 passed 00:12:52.700 Test: mem map translation ...[2024-11-05 16:36:57.127961] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:12:52.700 [2024-11-05 16:36:57.127989] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:12:52.700 [2024-11-05 16:36:57.128034] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:12:52.700 [2024-11-05 16:36:57.128048] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:12:52.700 passed 00:12:52.700 Test: mem map registration ...[2024-11-05 16:36:57.160424] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:12:52.700 [2024-11-05 16:36:57.160446] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:12:52.700 passed 00:12:52.700 Test: mem map adjacent registrations ...passed 00:12:52.700 00:12:52.700 Run Summary: Type Total Ran Passed Failed Inactive 00:12:52.700 suites 1 1 n/a 0 0 00:12:52.700 tests 4 4 4 0 0 00:12:52.700 asserts 152 152 152 0 n/a 00:12:52.700 00:12:52.700 Elapsed time = 0.119 seconds 00:12:52.700 00:12:52.700 real 0m0.133s 00:12:52.700 user 0m0.121s 00:12:52.700 sys 0m0.011s 00:12:52.700 16:36:57 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:52.700 16:36:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:12:52.700 ************************************ 00:12:52.700 END TEST env_memory 00:12:52.700 ************************************ 00:12:52.700 16:36:57 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:12:52.701 16:36:57 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:52.701 16:36:57 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:52.701 16:36:57 env -- common/autotest_common.sh@10 -- # set +x 00:12:52.701 ************************************ 00:12:52.701 START TEST env_vtophys 00:12:52.701 ************************************ 00:12:52.701 16:36:57 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:12:52.958 EAL: lib.eal log level changed from notice to debug 00:12:52.958 EAL: Detected lcore 0 as core 0 on socket 0 00:12:52.958 EAL: Detected lcore 1 as core 1 on socket 0 00:12:52.958 EAL: Detected lcore 2 as core 2 on socket 0 00:12:52.958 EAL: Detected lcore 3 as 
core 3 on socket 0 00:12:52.958 EAL: Detected lcore 4 as core 4 on socket 0 00:12:52.958 EAL: Detected lcore 5 as core 8 on socket 0 00:12:52.958 EAL: Detected lcore 6 as core 9 on socket 0 00:12:52.958 EAL: Detected lcore 7 as core 10 on socket 0 00:12:52.958 EAL: Detected lcore 8 as core 11 on socket 0 00:12:52.958 EAL: Detected lcore 9 as core 16 on socket 0 00:12:52.958 EAL: Detected lcore 10 as core 17 on socket 0 00:12:52.958 EAL: Detected lcore 11 as core 18 on socket 0 00:12:52.958 EAL: Detected lcore 12 as core 19 on socket 0 00:12:52.958 EAL: Detected lcore 13 as core 20 on socket 0 00:12:52.958 EAL: Detected lcore 14 as core 24 on socket 0 00:12:52.958 EAL: Detected lcore 15 as core 25 on socket 0 00:12:52.958 EAL: Detected lcore 16 as core 26 on socket 0 00:12:52.958 EAL: Detected lcore 17 as core 27 on socket 0 00:12:52.958 EAL: Detected lcore 18 as core 0 on socket 1 00:12:52.958 EAL: Detected lcore 19 as core 1 on socket 1 00:12:52.958 EAL: Detected lcore 20 as core 2 on socket 1 00:12:52.958 EAL: Detected lcore 21 as core 3 on socket 1 00:12:52.958 EAL: Detected lcore 22 as core 4 on socket 1 00:12:52.958 EAL: Detected lcore 23 as core 8 on socket 1 00:12:52.958 EAL: Detected lcore 24 as core 9 on socket 1 00:12:52.958 EAL: Detected lcore 25 as core 10 on socket 1 00:12:52.958 EAL: Detected lcore 26 as core 11 on socket 1 00:12:52.958 EAL: Detected lcore 27 as core 16 on socket 1 00:12:52.958 EAL: Detected lcore 28 as core 17 on socket 1 00:12:52.958 EAL: Detected lcore 29 as core 18 on socket 1 00:12:52.958 EAL: Detected lcore 30 as core 19 on socket 1 00:12:52.958 EAL: Detected lcore 31 as core 20 on socket 1 00:12:52.958 EAL: Detected lcore 32 as core 24 on socket 1 00:12:52.958 EAL: Detected lcore 33 as core 25 on socket 1 00:12:52.958 EAL: Detected lcore 34 as core 26 on socket 1 00:12:52.958 EAL: Detected lcore 35 as core 27 on socket 1 00:12:52.958 EAL: Detected lcore 36 as core 0 on socket 0 00:12:52.958 EAL: Detected lcore 37 as core 1 on socket 0 00:12:52.958 EAL: Detected lcore 38 as core 2 on socket 0 00:12:52.958 EAL: Detected lcore 39 as core 3 on socket 0 00:12:52.958 EAL: Detected lcore 40 as core 4 on socket 0 00:12:52.958 EAL: Detected lcore 41 as core 8 on socket 0 00:12:52.958 EAL: Detected lcore 42 as core 9 on socket 0 00:12:52.958 EAL: Detected lcore 43 as core 10 on socket 0 00:12:52.958 EAL: Detected lcore 44 as core 11 on socket 0 00:12:52.958 EAL: Detected lcore 45 as core 16 on socket 0 00:12:52.958 EAL: Detected lcore 46 as core 17 on socket 0 00:12:52.958 EAL: Detected lcore 47 as core 18 on socket 0 00:12:52.958 EAL: Detected lcore 48 as core 19 on socket 0 00:12:52.958 EAL: Detected lcore 49 as core 20 on socket 0 00:12:52.958 EAL: Detected lcore 50 as core 24 on socket 0 00:12:52.958 EAL: Detected lcore 51 as core 25 on socket 0 00:12:52.958 EAL: Detected lcore 52 as core 26 on socket 0 00:12:52.958 EAL: Detected lcore 53 as core 27 on socket 0 00:12:52.958 EAL: Detected lcore 54 as core 0 on socket 1 00:12:52.959 EAL: Detected lcore 55 as core 1 on socket 1 00:12:52.959 EAL: Detected lcore 56 as core 2 on socket 1 00:12:52.959 EAL: Detected lcore 57 as core 3 on socket 1 00:12:52.959 EAL: Detected lcore 58 as core 4 on socket 1 00:12:52.959 EAL: Detected lcore 59 as core 8 on socket 1 00:12:52.959 EAL: Detected lcore 60 as core 9 on socket 1 00:12:52.959 EAL: Detected lcore 61 as core 10 on socket 1 00:12:52.959 EAL: Detected lcore 62 as core 11 on socket 1 00:12:52.959 EAL: Detected lcore 63 as core 16 on socket 1 00:12:52.959 EAL: 
Detected lcore 64 as core 17 on socket 1 00:12:52.959 EAL: Detected lcore 65 as core 18 on socket 1 00:12:52.959 EAL: Detected lcore 66 as core 19 on socket 1 00:12:52.959 EAL: Detected lcore 67 as core 20 on socket 1 00:12:52.959 EAL: Detected lcore 68 as core 24 on socket 1 00:12:52.959 EAL: Detected lcore 69 as core 25 on socket 1 00:12:52.959 EAL: Detected lcore 70 as core 26 on socket 1 00:12:52.959 EAL: Detected lcore 71 as core 27 on socket 1 00:12:52.959 EAL: Maximum logical cores by configuration: 128 00:12:52.959 EAL: Detected CPU lcores: 72 00:12:52.959 EAL: Detected NUMA nodes: 2 00:12:52.959 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:12:52.959 EAL: Checking presence of .so 'librte_eal.so.24' 00:12:52.959 EAL: Checking presence of .so 'librte_eal.so' 00:12:52.959 EAL: Detected static linkage of DPDK 00:12:52.959 EAL: No shared files mode enabled, IPC will be disabled 00:12:52.959 EAL: Bus pci wants IOVA as 'DC' 00:12:52.959 EAL: Buses did not request a specific IOVA mode. 00:12:52.959 EAL: IOMMU is available, selecting IOVA as VA mode. 00:12:52.959 EAL: Selected IOVA mode 'VA' 00:12:52.959 EAL: Probing VFIO support... 00:12:52.959 EAL: IOMMU type 1 (Type 1) is supported 00:12:52.959 EAL: IOMMU type 7 (sPAPR) is not supported 00:12:52.959 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:12:52.959 EAL: VFIO support initialized 00:12:52.959 EAL: Ask a virtual area of 0x2e000 bytes 00:12:52.959 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:12:52.959 EAL: Setting up physically contiguous memory... 00:12:52.959 EAL: Setting maximum number of open files to 524288 00:12:52.959 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:12:52.959 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:12:52.959 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:12:52.959 EAL: Ask a virtual area of 0x61000 bytes 00:12:52.959 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:12:52.959 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:52.959 EAL: Ask a virtual area of 0x400000000 bytes 00:12:52.959 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:12:52.959 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:12:52.959 EAL: Ask a virtual area of 0x61000 bytes 00:12:52.959 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:12:52.959 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:52.959 EAL: Ask a virtual area of 0x400000000 bytes 00:12:52.959 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:12:52.959 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:12:52.959 EAL: Ask a virtual area of 0x61000 bytes 00:12:52.959 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:12:52.959 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:52.959 EAL: Ask a virtual area of 0x400000000 bytes 00:12:52.959 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:12:52.959 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:12:52.959 EAL: Ask a virtual area of 0x61000 bytes 00:12:52.959 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:12:52.959 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:52.959 EAL: Ask a virtual area of 0x400000000 bytes 00:12:52.959 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:12:52.959 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:12:52.959 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:12:52.959 EAL: Ask a virtual area of 0x61000 bytes 00:12:52.959 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:12:52.959 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:12:52.959 EAL: Ask a virtual area of 0x400000000 bytes 00:12:52.959 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:12:52.959 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:12:52.959 EAL: Ask a virtual area of 0x61000 bytes 00:12:52.959 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:12:52.959 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:12:52.959 EAL: Ask a virtual area of 0x400000000 bytes 00:12:52.959 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:12:52.959 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:12:52.959 EAL: Ask a virtual area of 0x61000 bytes 00:12:52.959 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:12:52.959 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:12:52.959 EAL: Ask a virtual area of 0x400000000 bytes 00:12:52.959 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:12:52.959 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:12:52.959 EAL: Ask a virtual area of 0x61000 bytes 00:12:52.959 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:12:52.959 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:12:52.959 EAL: Ask a virtual area of 0x400000000 bytes 00:12:52.959 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:12:52.959 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:12:52.959 EAL: Hugepages will be freed exactly as allocated. 00:12:52.959 EAL: No shared files mode enabled, IPC is disabled 00:12:52.959 EAL: No shared files mode enabled, IPC is disabled 00:12:52.959 EAL: TSC frequency is ~2300000 KHz 00:12:52.959 EAL: Main lcore 0 is ready (tid=7fbf8d333a00;cpuset=[0]) 00:12:52.959 EAL: Trying to obtain current memory policy. 00:12:52.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:52.959 EAL: Restoring previous memory policy: 0 00:12:52.959 EAL: request: mp_malloc_sync 00:12:52.959 EAL: No shared files mode enabled, IPC is disabled 00:12:52.959 EAL: Heap on socket 0 was expanded by 2MB 00:12:52.959 EAL: No shared files mode enabled, IPC is disabled 00:12:52.959 EAL: Mem event callback 'spdk:(nil)' registered 00:12:52.959 00:12:52.959 00:12:52.959 CUnit - A unit testing framework for C - Version 2.1-3 00:12:52.959 http://cunit.sourceforge.net/ 00:12:52.959 00:12:52.959 00:12:52.959 Suite: components_suite 00:12:52.959 Test: vtophys_malloc_test ...passed 00:12:52.959 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:12:52.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:52.959 EAL: Restoring previous memory policy: 4 00:12:52.959 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.959 EAL: request: mp_malloc_sync 00:12:52.959 EAL: No shared files mode enabled, IPC is disabled 00:12:52.959 EAL: Heap on socket 0 was expanded by 4MB 00:12:52.959 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.959 EAL: request: mp_malloc_sync 00:12:52.959 EAL: No shared files mode enabled, IPC is disabled 00:12:52.959 EAL: Heap on socket 0 was shrunk by 4MB 00:12:52.959 EAL: Trying to obtain current memory policy. 
00:12:52.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:52.959 EAL: Restoring previous memory policy: 4 00:12:52.959 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.959 EAL: request: mp_malloc_sync 00:12:52.959 EAL: No shared files mode enabled, IPC is disabled 00:12:52.959 EAL: Heap on socket 0 was expanded by 6MB 00:12:52.959 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.959 EAL: request: mp_malloc_sync 00:12:52.959 EAL: No shared files mode enabled, IPC is disabled 00:12:52.959 EAL: Heap on socket 0 was shrunk by 6MB 00:12:52.959 EAL: Trying to obtain current memory policy. 00:12:52.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:52.959 EAL: Restoring previous memory policy: 4 00:12:52.959 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.959 EAL: request: mp_malloc_sync 00:12:52.959 EAL: No shared files mode enabled, IPC is disabled 00:12:52.959 EAL: Heap on socket 0 was expanded by 10MB 00:12:52.959 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.959 EAL: request: mp_malloc_sync 00:12:52.959 EAL: No shared files mode enabled, IPC is disabled 00:12:52.959 EAL: Heap on socket 0 was shrunk by 10MB 00:12:52.959 EAL: Trying to obtain current memory policy. 00:12:52.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:52.959 EAL: Restoring previous memory policy: 4 00:12:52.959 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.959 EAL: request: mp_malloc_sync 00:12:52.959 EAL: No shared files mode enabled, IPC is disabled 00:12:52.959 EAL: Heap on socket 0 was expanded by 18MB 00:12:52.959 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.959 EAL: request: mp_malloc_sync 00:12:52.959 EAL: No shared files mode enabled, IPC is disabled 00:12:52.959 EAL: Heap on socket 0 was shrunk by 18MB 00:12:52.959 EAL: Trying to obtain current memory policy. 00:12:52.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:52.959 EAL: Restoring previous memory policy: 4 00:12:52.959 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.959 EAL: request: mp_malloc_sync 00:12:52.959 EAL: No shared files mode enabled, IPC is disabled 00:12:52.959 EAL: Heap on socket 0 was expanded by 34MB 00:12:52.959 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.959 EAL: request: mp_malloc_sync 00:12:52.959 EAL: No shared files mode enabled, IPC is disabled 00:12:52.959 EAL: Heap on socket 0 was shrunk by 34MB 00:12:52.959 EAL: Trying to obtain current memory policy. 00:12:52.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:52.959 EAL: Restoring previous memory policy: 4 00:12:52.959 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.959 EAL: request: mp_malloc_sync 00:12:52.959 EAL: No shared files mode enabled, IPC is disabled 00:12:52.959 EAL: Heap on socket 0 was expanded by 66MB 00:12:52.959 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.959 EAL: request: mp_malloc_sync 00:12:52.959 EAL: No shared files mode enabled, IPC is disabled 00:12:52.959 EAL: Heap on socket 0 was shrunk by 66MB 00:12:52.959 EAL: Trying to obtain current memory policy. 
00:12:52.959 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:52.959 EAL: Restoring previous memory policy: 4 00:12:52.959 EAL: Calling mem event callback 'spdk:(nil)' 00:12:52.959 EAL: request: mp_malloc_sync 00:12:52.959 EAL: No shared files mode enabled, IPC is disabled 00:12:52.959 EAL: Heap on socket 0 was expanded by 130MB 00:12:52.959 EAL: Calling mem event callback 'spdk:(nil)' 00:12:53.218 EAL: request: mp_malloc_sync 00:12:53.218 EAL: No shared files mode enabled, IPC is disabled 00:12:53.218 EAL: Heap on socket 0 was shrunk by 130MB 00:12:53.218 EAL: Trying to obtain current memory policy. 00:12:53.218 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:53.218 EAL: Restoring previous memory policy: 4 00:12:53.218 EAL: Calling mem event callback 'spdk:(nil)' 00:12:53.218 EAL: request: mp_malloc_sync 00:12:53.218 EAL: No shared files mode enabled, IPC is disabled 00:12:53.218 EAL: Heap on socket 0 was expanded by 258MB 00:12:53.218 EAL: Calling mem event callback 'spdk:(nil)' 00:12:53.218 EAL: request: mp_malloc_sync 00:12:53.218 EAL: No shared files mode enabled, IPC is disabled 00:12:53.218 EAL: Heap on socket 0 was shrunk by 258MB 00:12:53.218 EAL: Trying to obtain current memory policy. 00:12:53.218 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:53.476 EAL: Restoring previous memory policy: 4 00:12:53.476 EAL: Calling mem event callback 'spdk:(nil)' 00:12:53.476 EAL: request: mp_malloc_sync 00:12:53.476 EAL: No shared files mode enabled, IPC is disabled 00:12:53.476 EAL: Heap on socket 0 was expanded by 514MB 00:12:53.476 EAL: Calling mem event callback 'spdk:(nil)' 00:12:53.476 EAL: request: mp_malloc_sync 00:12:53.476 EAL: No shared files mode enabled, IPC is disabled 00:12:53.476 EAL: Heap on socket 0 was shrunk by 514MB 00:12:53.476 EAL: Trying to obtain current memory policy. 
00:12:53.476 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:53.734 EAL: Restoring previous memory policy: 4 00:12:53.734 EAL: Calling mem event callback 'spdk:(nil)' 00:12:53.734 EAL: request: mp_malloc_sync 00:12:53.734 EAL: No shared files mode enabled, IPC is disabled 00:12:53.734 EAL: Heap on socket 0 was expanded by 1026MB 00:12:53.992 EAL: Calling mem event callback 'spdk:(nil)' 00:12:54.250 EAL: request: mp_malloc_sync 00:12:54.250 EAL: No shared files mode enabled, IPC is disabled 00:12:54.250 EAL: Heap on socket 0 was shrunk by 1026MB 00:12:54.250 passed 00:12:54.250 00:12:54.250 Run Summary: Type Total Ran Passed Failed Inactive 00:12:54.250 suites 1 1 n/a 0 0 00:12:54.250 tests 2 2 2 0 0 00:12:54.250 asserts 497 497 497 0 n/a 00:12:54.250 00:12:54.250 Elapsed time = 1.149 seconds 00:12:54.250 EAL: Calling mem event callback 'spdk:(nil)' 00:12:54.250 EAL: request: mp_malloc_sync 00:12:54.250 EAL: No shared files mode enabled, IPC is disabled 00:12:54.251 EAL: Heap on socket 0 was shrunk by 2MB 00:12:54.251 EAL: No shared files mode enabled, IPC is disabled 00:12:54.251 EAL: No shared files mode enabled, IPC is disabled 00:12:54.251 EAL: No shared files mode enabled, IPC is disabled 00:12:54.251 00:12:54.251 real 0m1.325s 00:12:54.251 user 0m0.749s 00:12:54.251 sys 0m0.549s 00:12:54.251 16:36:58 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:54.251 16:36:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:12:54.251 ************************************ 00:12:54.251 END TEST env_vtophys 00:12:54.251 ************************************ 00:12:54.251 16:36:58 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:12:54.251 16:36:58 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:54.251 16:36:58 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:54.251 16:36:58 env -- common/autotest_common.sh@10 -- # set +x 00:12:54.251 ************************************ 00:12:54.251 START TEST env_pci 00:12:54.251 ************************************ 00:12:54.251 16:36:58 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:12:54.251 00:12:54.251 00:12:54.251 CUnit - A unit testing framework for C - Version 2.1-3 00:12:54.251 http://cunit.sourceforge.net/ 00:12:54.251 00:12:54.251 00:12:54.251 Suite: pci 00:12:54.251 Test: pci_hook ...[2024-11-05 16:36:58.707658] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1118:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3508186 has claimed it 00:12:54.251 EAL: Cannot find device (10000:00:01.0) 00:12:54.251 EAL: Failed to attach device on primary process 00:12:54.251 passed 00:12:54.251 00:12:54.251 Run Summary: Type Total Ran Passed Failed Inactive 00:12:54.251 suites 1 1 n/a 0 0 00:12:54.251 tests 1 1 1 0 0 00:12:54.251 asserts 25 25 25 0 n/a 00:12:54.251 00:12:54.251 Elapsed time = 0.049 seconds 00:12:54.251 00:12:54.251 real 0m0.070s 00:12:54.251 user 0m0.019s 00:12:54.251 sys 0m0.051s 00:12:54.251 16:36:58 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:54.251 16:36:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:12:54.251 ************************************ 00:12:54.251 END TEST env_pci 00:12:54.251 ************************************ 00:12:54.251 16:36:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:12:54.251 
16:36:58 env -- env/env.sh@15 -- # uname 00:12:54.251 16:36:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:12:54.251 16:36:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:12:54.251 16:36:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:12:54.251 16:36:58 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:54.251 16:36:58 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:54.251 16:36:58 env -- common/autotest_common.sh@10 -- # set +x 00:12:54.509 ************************************ 00:12:54.509 START TEST env_dpdk_post_init 00:12:54.509 ************************************ 00:12:54.509 16:36:58 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:12:54.509 EAL: Detected CPU lcores: 72 00:12:54.509 EAL: Detected NUMA nodes: 2 00:12:54.509 EAL: Detected static linkage of DPDK 00:12:54.509 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:12:54.509 EAL: Selected IOVA mode 'VA' 00:12:54.509 EAL: VFIO support initialized 00:12:54.509 TELEMETRY: No legacy callbacks, legacy socket not created 00:12:54.509 EAL: Using IOMMU type 1 (Type 1) 00:12:55.447 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:1a:00.0 (socket 0) 00:13:00.715 EAL: Releasing PCI mapped resource for 0000:1a:00.0 00:13:00.715 EAL: Calling pci_unmap_resource for 0000:1a:00.0 at 0x202001000000 00:13:00.973 Starting DPDK initialization... 00:13:00.974 Starting SPDK post initialization... 00:13:00.974 SPDK NVMe probe 00:13:00.974 Attaching to 0000:1a:00.0 00:13:00.974 Attached to 0000:1a:00.0 00:13:00.974 Cleaning up... 
00:13:00.974 00:13:00.974 real 0m6.600s 00:13:00.974 user 0m4.744s 00:13:00.974 sys 0m1.104s 00:13:00.974 16:37:05 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:00.974 16:37:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:13:00.974 ************************************ 00:13:00.974 END TEST env_dpdk_post_init 00:13:00.974 ************************************ 00:13:00.974 16:37:05 env -- env/env.sh@26 -- # uname 00:13:00.974 16:37:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:13:00.974 16:37:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:13:00.974 16:37:05 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:00.974 16:37:05 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:00.974 16:37:05 env -- common/autotest_common.sh@10 -- # set +x 00:13:00.974 ************************************ 00:13:00.974 START TEST env_mem_callbacks 00:13:00.974 ************************************ 00:13:00.974 16:37:05 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:13:00.974 EAL: Detected CPU lcores: 72 00:13:00.974 EAL: Detected NUMA nodes: 2 00:13:00.974 EAL: Detected static linkage of DPDK 00:13:00.974 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:13:01.232 EAL: Selected IOVA mode 'VA' 00:13:01.232 EAL: VFIO support initialized 00:13:01.232 TELEMETRY: No legacy callbacks, legacy socket not created 00:13:01.232 00:13:01.232 00:13:01.232 CUnit - A unit testing framework for C - Version 2.1-3 00:13:01.232 http://cunit.sourceforge.net/ 00:13:01.232 00:13:01.232 00:13:01.232 Suite: memory 00:13:01.232 Test: test ... 
00:13:01.232 register 0x200000200000 2097152 00:13:01.232 malloc 3145728 00:13:01.232 register 0x200000400000 4194304 00:13:01.232 buf 0x200000500000 len 3145728 PASSED 00:13:01.232 malloc 64 00:13:01.232 buf 0x2000004fff40 len 64 PASSED 00:13:01.232 malloc 4194304 00:13:01.232 register 0x200000800000 6291456 00:13:01.232 buf 0x200000a00000 len 4194304 PASSED 00:13:01.232 free 0x200000500000 3145728 00:13:01.232 free 0x2000004fff40 64 00:13:01.232 unregister 0x200000400000 4194304 PASSED 00:13:01.232 free 0x200000a00000 4194304 00:13:01.232 unregister 0x200000800000 6291456 PASSED 00:13:01.232 malloc 8388608 00:13:01.232 register 0x200000400000 10485760 00:13:01.232 buf 0x200000600000 len 8388608 PASSED 00:13:01.232 free 0x200000600000 8388608 00:13:01.232 unregister 0x200000400000 10485760 PASSED 00:13:01.232 passed 00:13:01.232 00:13:01.232 Run Summary: Type Total Ran Passed Failed Inactive 00:13:01.232 suites 1 1 n/a 0 0 00:13:01.232 tests 1 1 1 0 0 00:13:01.232 asserts 15 15 15 0 n/a 00:13:01.232 00:13:01.232 Elapsed time = 0.008 seconds 00:13:01.232 00:13:01.232 real 0m0.089s 00:13:01.232 user 0m0.017s 00:13:01.232 sys 0m0.072s 00:13:01.232 16:37:05 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:01.232 16:37:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:13:01.232 ************************************ 00:13:01.232 END TEST env_mem_callbacks 00:13:01.232 ************************************ 00:13:01.232 00:13:01.232 real 0m8.826s 00:13:01.232 user 0m5.914s 00:13:01.232 sys 0m2.182s 00:13:01.232 16:37:05 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:01.232 16:37:05 env -- common/autotest_common.sh@10 -- # set +x 00:13:01.232 ************************************ 00:13:01.232 END TEST env 00:13:01.232 ************************************ 00:13:01.232 16:37:05 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:13:01.232 16:37:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:01.232 16:37:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:01.232 16:37:05 -- common/autotest_common.sh@10 -- # set +x 00:13:01.232 ************************************ 00:13:01.232 START TEST rpc 00:13:01.232 ************************************ 00:13:01.232 16:37:05 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:13:01.491 * Looking for test storage... 
00:13:01.491 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:13:01.491 16:37:05 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:01.491 16:37:05 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:13:01.491 16:37:05 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:01.491 16:37:05 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:01.491 16:37:05 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:01.491 16:37:05 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:01.491 16:37:05 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:01.491 16:37:05 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:01.491 16:37:05 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:01.491 16:37:05 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:01.491 16:37:05 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:01.491 16:37:05 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:01.491 16:37:05 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:01.491 16:37:05 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:01.491 16:37:05 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:01.491 16:37:05 rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:01.491 16:37:05 rpc -- scripts/common.sh@345 -- # : 1 00:13:01.491 16:37:05 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:01.491 16:37:05 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:01.491 16:37:05 rpc -- scripts/common.sh@365 -- # decimal 1 00:13:01.491 16:37:05 rpc -- scripts/common.sh@353 -- # local d=1 00:13:01.491 16:37:05 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:01.491 16:37:05 rpc -- scripts/common.sh@355 -- # echo 1 00:13:01.491 16:37:05 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:01.491 16:37:05 rpc -- scripts/common.sh@366 -- # decimal 2 00:13:01.491 16:37:05 rpc -- scripts/common.sh@353 -- # local d=2 00:13:01.491 16:37:05 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:01.491 16:37:05 rpc -- scripts/common.sh@355 -- # echo 2 00:13:01.491 16:37:05 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:01.491 16:37:05 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:01.491 16:37:05 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:01.491 16:37:05 rpc -- scripts/common.sh@368 -- # return 0 00:13:01.491 16:37:05 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:01.491 16:37:05 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:01.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.491 --rc genhtml_branch_coverage=1 00:13:01.491 --rc genhtml_function_coverage=1 00:13:01.491 --rc genhtml_legend=1 00:13:01.491 --rc geninfo_all_blocks=1 00:13:01.491 --rc geninfo_unexecuted_blocks=1 00:13:01.491 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:01.491 ' 00:13:01.491 16:37:05 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:01.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.491 --rc genhtml_branch_coverage=1 00:13:01.491 --rc genhtml_function_coverage=1 00:13:01.491 --rc genhtml_legend=1 00:13:01.491 --rc geninfo_all_blocks=1 00:13:01.491 --rc geninfo_unexecuted_blocks=1 00:13:01.491 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:01.491 ' 00:13:01.491 16:37:05 rpc -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:13:01.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.491 --rc genhtml_branch_coverage=1 00:13:01.491 --rc genhtml_function_coverage=1 00:13:01.491 --rc genhtml_legend=1 00:13:01.491 --rc geninfo_all_blocks=1 00:13:01.491 --rc geninfo_unexecuted_blocks=1 00:13:01.491 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:01.491 ' 00:13:01.491 16:37:05 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:01.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.491 --rc genhtml_branch_coverage=1 00:13:01.491 --rc genhtml_function_coverage=1 00:13:01.491 --rc genhtml_legend=1 00:13:01.491 --rc geninfo_all_blocks=1 00:13:01.491 --rc geninfo_unexecuted_blocks=1 00:13:01.491 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:01.491 ' 00:13:01.491 16:37:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3509345 00:13:01.491 16:37:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:01.491 16:37:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3509345 00:13:01.491 16:37:05 rpc -- common/autotest_common.sh@833 -- # '[' -z 3509345 ']' 00:13:01.492 16:37:05 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.492 16:37:05 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:01.492 16:37:05 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.492 16:37:05 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:01.492 16:37:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.492 16:37:05 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:13:01.492 [2024-11-05 16:37:05.962359] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:13:01.492 [2024-11-05 16:37:05.962435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3509345 ] 00:13:01.751 [2024-11-05 16:37:06.084633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.751 [2024-11-05 16:37:06.139338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:13:01.751 [2024-11-05 16:37:06.139384] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3509345' to capture a snapshot of events at runtime. 00:13:01.751 [2024-11-05 16:37:06.139398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.751 [2024-11-05 16:37:06.139412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.751 [2024-11-05 16:37:06.139423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3509345 for offline analysis/debug. 
00:13:01.751 [2024-11-05 16:37:06.140020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.010 16:37:06 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:02.010 16:37:06 rpc -- common/autotest_common.sh@866 -- # return 0 00:13:02.010 16:37:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:13:02.010 16:37:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:13:02.010 16:37:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:13:02.010 16:37:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:13:02.010 16:37:06 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:02.010 16:37:06 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:02.010 16:37:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.010 ************************************ 00:13:02.010 START TEST rpc_integrity 00:13:02.010 ************************************ 00:13:02.010 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:13:02.010 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:02.010 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.010 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:02.010 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.010 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:13:02.010 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:13:02.010 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:13:02.010 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:13:02.010 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.010 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:02.010 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.010 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:13:02.010 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:13:02.010 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.010 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:02.010 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.010 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:13:02.010 { 00:13:02.010 "name": "Malloc0", 00:13:02.010 "aliases": [ 00:13:02.010 "41cdb3f3-e129-44ab-9238-80c62aae895a" 00:13:02.010 ], 00:13:02.010 "product_name": "Malloc disk", 00:13:02.011 "block_size": 512, 00:13:02.011 "num_blocks": 16384, 00:13:02.011 "uuid": "41cdb3f3-e129-44ab-9238-80c62aae895a", 00:13:02.011 "assigned_rate_limits": { 00:13:02.011 "rw_ios_per_sec": 0, 00:13:02.011 "rw_mbytes_per_sec": 0, 00:13:02.011 "r_mbytes_per_sec": 0, 00:13:02.011 "w_mbytes_per_sec": 
0 00:13:02.011 }, 00:13:02.011 "claimed": false, 00:13:02.011 "zoned": false, 00:13:02.011 "supported_io_types": { 00:13:02.011 "read": true, 00:13:02.011 "write": true, 00:13:02.011 "unmap": true, 00:13:02.011 "flush": true, 00:13:02.011 "reset": true, 00:13:02.011 "nvme_admin": false, 00:13:02.011 "nvme_io": false, 00:13:02.011 "nvme_io_md": false, 00:13:02.011 "write_zeroes": true, 00:13:02.011 "zcopy": true, 00:13:02.011 "get_zone_info": false, 00:13:02.011 "zone_management": false, 00:13:02.011 "zone_append": false, 00:13:02.011 "compare": false, 00:13:02.011 "compare_and_write": false, 00:13:02.011 "abort": true, 00:13:02.011 "seek_hole": false, 00:13:02.011 "seek_data": false, 00:13:02.011 "copy": true, 00:13:02.011 "nvme_iov_md": false 00:13:02.011 }, 00:13:02.011 "memory_domains": [ 00:13:02.011 { 00:13:02.011 "dma_device_id": "system", 00:13:02.011 "dma_device_type": 1 00:13:02.011 }, 00:13:02.011 { 00:13:02.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.011 "dma_device_type": 2 00:13:02.011 } 00:13:02.011 ], 00:13:02.011 "driver_specific": {} 00:13:02.011 } 00:13:02.011 ]' 00:13:02.011 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:13:02.011 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:13:02.011 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:13:02.011 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.011 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:02.011 [2024-11-05 16:37:06.515967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:13:02.011 [2024-11-05 16:37:06.516007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.011 [2024-11-05 16:37:06.516028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x603ad10 00:13:02.011 [2024-11-05 16:37:06.516042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.011 [2024-11-05 16:37:06.517293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.011 [2024-11-05 16:37:06.517322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:13:02.011 Passthru0 00:13:02.011 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.011 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:13:02.011 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.011 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:02.011 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.011 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:13:02.011 { 00:13:02.011 "name": "Malloc0", 00:13:02.011 "aliases": [ 00:13:02.011 "41cdb3f3-e129-44ab-9238-80c62aae895a" 00:13:02.011 ], 00:13:02.011 "product_name": "Malloc disk", 00:13:02.011 "block_size": 512, 00:13:02.011 "num_blocks": 16384, 00:13:02.011 "uuid": "41cdb3f3-e129-44ab-9238-80c62aae895a", 00:13:02.011 "assigned_rate_limits": { 00:13:02.011 "rw_ios_per_sec": 0, 00:13:02.011 "rw_mbytes_per_sec": 0, 00:13:02.011 "r_mbytes_per_sec": 0, 00:13:02.011 "w_mbytes_per_sec": 0 00:13:02.011 }, 00:13:02.011 "claimed": true, 00:13:02.011 "claim_type": "exclusive_write", 00:13:02.011 "zoned": false, 00:13:02.011 "supported_io_types": { 00:13:02.011 "read": true, 00:13:02.011 "write": true, 00:13:02.011 "unmap": true, 
00:13:02.011 "flush": true, 00:13:02.011 "reset": true, 00:13:02.011 "nvme_admin": false, 00:13:02.011 "nvme_io": false, 00:13:02.011 "nvme_io_md": false, 00:13:02.011 "write_zeroes": true, 00:13:02.011 "zcopy": true, 00:13:02.011 "get_zone_info": false, 00:13:02.011 "zone_management": false, 00:13:02.011 "zone_append": false, 00:13:02.011 "compare": false, 00:13:02.011 "compare_and_write": false, 00:13:02.011 "abort": true, 00:13:02.011 "seek_hole": false, 00:13:02.011 "seek_data": false, 00:13:02.011 "copy": true, 00:13:02.011 "nvme_iov_md": false 00:13:02.011 }, 00:13:02.011 "memory_domains": [ 00:13:02.011 { 00:13:02.011 "dma_device_id": "system", 00:13:02.011 "dma_device_type": 1 00:13:02.011 }, 00:13:02.011 { 00:13:02.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.011 "dma_device_type": 2 00:13:02.011 } 00:13:02.011 ], 00:13:02.011 "driver_specific": {} 00:13:02.011 }, 00:13:02.011 { 00:13:02.011 "name": "Passthru0", 00:13:02.011 "aliases": [ 00:13:02.011 "fe96d0a4-e933-595c-9da3-12e2c27a9e41" 00:13:02.011 ], 00:13:02.011 "product_name": "passthru", 00:13:02.011 "block_size": 512, 00:13:02.011 "num_blocks": 16384, 00:13:02.011 "uuid": "fe96d0a4-e933-595c-9da3-12e2c27a9e41", 00:13:02.011 "assigned_rate_limits": { 00:13:02.011 "rw_ios_per_sec": 0, 00:13:02.011 "rw_mbytes_per_sec": 0, 00:13:02.011 "r_mbytes_per_sec": 0, 00:13:02.011 "w_mbytes_per_sec": 0 00:13:02.011 }, 00:13:02.011 "claimed": false, 00:13:02.011 "zoned": false, 00:13:02.011 "supported_io_types": { 00:13:02.011 "read": true, 00:13:02.011 "write": true, 00:13:02.011 "unmap": true, 00:13:02.011 "flush": true, 00:13:02.011 "reset": true, 00:13:02.011 "nvme_admin": false, 00:13:02.011 "nvme_io": false, 00:13:02.011 "nvme_io_md": false, 00:13:02.011 "write_zeroes": true, 00:13:02.011 "zcopy": true, 00:13:02.011 "get_zone_info": false, 00:13:02.011 "zone_management": false, 00:13:02.011 "zone_append": false, 00:13:02.011 "compare": false, 00:13:02.011 "compare_and_write": false, 00:13:02.011 "abort": true, 00:13:02.011 "seek_hole": false, 00:13:02.011 "seek_data": false, 00:13:02.011 "copy": true, 00:13:02.011 "nvme_iov_md": false 00:13:02.011 }, 00:13:02.011 "memory_domains": [ 00:13:02.011 { 00:13:02.011 "dma_device_id": "system", 00:13:02.011 "dma_device_type": 1 00:13:02.011 }, 00:13:02.011 { 00:13:02.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.011 "dma_device_type": 2 00:13:02.011 } 00:13:02.011 ], 00:13:02.011 "driver_specific": { 00:13:02.011 "passthru": { 00:13:02.011 "name": "Passthru0", 00:13:02.011 "base_bdev_name": "Malloc0" 00:13:02.011 } 00:13:02.011 } 00:13:02.011 } 00:13:02.011 ]' 00:13:02.011 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:13:02.011 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:13:02.011 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:13:02.011 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.011 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:02.011 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.011 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:02.011 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.011 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:02.270 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.270 16:37:06 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:02.270 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.270 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:02.270 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.270 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:13:02.270 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:13:02.270 16:37:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:13:02.270 00:13:02.270 real 0m0.241s 00:13:02.270 user 0m0.146s 00:13:02.270 sys 0m0.028s 00:13:02.270 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:02.270 16:37:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:02.270 ************************************ 00:13:02.270 END TEST rpc_integrity 00:13:02.270 ************************************ 00:13:02.270 16:37:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:13:02.270 16:37:06 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:02.270 16:37:06 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:02.270 16:37:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.270 ************************************ 00:13:02.270 START TEST rpc_plugins 00:13:02.270 ************************************ 00:13:02.270 16:37:06 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:13:02.270 16:37:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:13:02.270 16:37:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.270 16:37:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:02.270 16:37:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.270 16:37:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:13:02.270 16:37:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:13:02.270 16:37:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.270 16:37:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:02.270 16:37:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.270 16:37:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:13:02.270 { 00:13:02.270 "name": "Malloc1", 00:13:02.270 "aliases": [ 00:13:02.270 "db90c8b4-e3f0-4aa1-be01-3c15324d45fd" 00:13:02.270 ], 00:13:02.270 "product_name": "Malloc disk", 00:13:02.270 "block_size": 4096, 00:13:02.270 "num_blocks": 256, 00:13:02.270 "uuid": "db90c8b4-e3f0-4aa1-be01-3c15324d45fd", 00:13:02.270 "assigned_rate_limits": { 00:13:02.270 "rw_ios_per_sec": 0, 00:13:02.270 "rw_mbytes_per_sec": 0, 00:13:02.270 "r_mbytes_per_sec": 0, 00:13:02.270 "w_mbytes_per_sec": 0 00:13:02.270 }, 00:13:02.270 "claimed": false, 00:13:02.270 "zoned": false, 00:13:02.270 "supported_io_types": { 00:13:02.270 "read": true, 00:13:02.270 "write": true, 00:13:02.270 "unmap": true, 00:13:02.270 "flush": true, 00:13:02.270 "reset": true, 00:13:02.270 "nvme_admin": false, 00:13:02.270 "nvme_io": false, 00:13:02.270 "nvme_io_md": false, 00:13:02.270 "write_zeroes": true, 00:13:02.270 "zcopy": true, 00:13:02.270 "get_zone_info": false, 00:13:02.270 "zone_management": false, 00:13:02.270 "zone_append": false, 00:13:02.270 "compare": false, 00:13:02.270 "compare_and_write": false, 00:13:02.270 "abort": true, 00:13:02.270 "seek_hole": false, 00:13:02.270 "seek_data": false, 00:13:02.270 "copy": true, 00:13:02.270 
"nvme_iov_md": false 00:13:02.270 }, 00:13:02.270 "memory_domains": [ 00:13:02.270 { 00:13:02.270 "dma_device_id": "system", 00:13:02.270 "dma_device_type": 1 00:13:02.270 }, 00:13:02.270 { 00:13:02.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.270 "dma_device_type": 2 00:13:02.270 } 00:13:02.270 ], 00:13:02.270 "driver_specific": {} 00:13:02.270 } 00:13:02.270 ]' 00:13:02.270 16:37:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:13:02.270 16:37:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:13:02.270 16:37:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:13:02.270 16:37:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.270 16:37:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:02.270 16:37:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.270 16:37:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:13:02.270 16:37:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.270 16:37:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:02.271 16:37:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.271 16:37:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:13:02.271 16:37:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:13:02.529 16:37:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:13:02.529 00:13:02.529 real 0m0.144s 00:13:02.529 user 0m0.083s 00:13:02.529 sys 0m0.019s 00:13:02.529 16:37:06 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:02.529 16:37:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:13:02.529 ************************************ 00:13:02.529 END TEST rpc_plugins 00:13:02.529 ************************************ 00:13:02.529 16:37:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:13:02.529 16:37:06 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:02.529 16:37:06 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:02.529 16:37:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.529 ************************************ 00:13:02.529 START TEST rpc_trace_cmd_test 00:13:02.529 ************************************ 00:13:02.529 16:37:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:13:02.529 16:37:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:13:02.529 16:37:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:13:02.529 16:37:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.529 16:37:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.529 16:37:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.529 16:37:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:13:02.529 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3509345", 00:13:02.529 "tpoint_group_mask": "0x8", 00:13:02.529 "iscsi_conn": { 00:13:02.529 "mask": "0x2", 00:13:02.529 "tpoint_mask": "0x0" 00:13:02.529 }, 00:13:02.529 "scsi": { 00:13:02.529 "mask": "0x4", 00:13:02.529 "tpoint_mask": "0x0" 00:13:02.529 }, 00:13:02.529 "bdev": { 00:13:02.529 "mask": "0x8", 00:13:02.529 "tpoint_mask": "0xffffffffffffffff" 00:13:02.529 }, 00:13:02.529 "nvmf_rdma": { 00:13:02.529 "mask": "0x10", 00:13:02.529 "tpoint_mask": "0x0" 00:13:02.529 }, 00:13:02.529 "nvmf_tcp": { 00:13:02.529 "mask": "0x20", 
00:13:02.529 "tpoint_mask": "0x0" 00:13:02.529 }, 00:13:02.529 "ftl": { 00:13:02.529 "mask": "0x40", 00:13:02.529 "tpoint_mask": "0x0" 00:13:02.529 }, 00:13:02.529 "blobfs": { 00:13:02.529 "mask": "0x80", 00:13:02.529 "tpoint_mask": "0x0" 00:13:02.529 }, 00:13:02.529 "dsa": { 00:13:02.529 "mask": "0x200", 00:13:02.529 "tpoint_mask": "0x0" 00:13:02.529 }, 00:13:02.529 "thread": { 00:13:02.529 "mask": "0x400", 00:13:02.529 "tpoint_mask": "0x0" 00:13:02.529 }, 00:13:02.529 "nvme_pcie": { 00:13:02.529 "mask": "0x800", 00:13:02.529 "tpoint_mask": "0x0" 00:13:02.529 }, 00:13:02.529 "iaa": { 00:13:02.529 "mask": "0x1000", 00:13:02.529 "tpoint_mask": "0x0" 00:13:02.529 }, 00:13:02.529 "nvme_tcp": { 00:13:02.529 "mask": "0x2000", 00:13:02.529 "tpoint_mask": "0x0" 00:13:02.529 }, 00:13:02.529 "bdev_nvme": { 00:13:02.529 "mask": "0x4000", 00:13:02.529 "tpoint_mask": "0x0" 00:13:02.529 }, 00:13:02.530 "sock": { 00:13:02.530 "mask": "0x8000", 00:13:02.530 "tpoint_mask": "0x0" 00:13:02.530 }, 00:13:02.530 "blob": { 00:13:02.530 "mask": "0x10000", 00:13:02.530 "tpoint_mask": "0x0" 00:13:02.530 }, 00:13:02.530 "bdev_raid": { 00:13:02.530 "mask": "0x20000", 00:13:02.530 "tpoint_mask": "0x0" 00:13:02.530 }, 00:13:02.530 "scheduler": { 00:13:02.530 "mask": "0x40000", 00:13:02.530 "tpoint_mask": "0x0" 00:13:02.530 } 00:13:02.530 }' 00:13:02.530 16:37:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:13:02.530 16:37:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:13:02.530 16:37:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:13:02.530 16:37:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:13:02.530 16:37:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:13:02.530 16:37:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:13:02.530 16:37:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:13:02.789 16:37:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:13:02.789 16:37:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:13:02.789 16:37:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:13:02.789 00:13:02.789 real 0m0.236s 00:13:02.789 user 0m0.190s 00:13:02.789 sys 0m0.037s 00:13:02.789 16:37:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:02.789 16:37:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.789 ************************************ 00:13:02.789 END TEST rpc_trace_cmd_test 00:13:02.789 ************************************ 00:13:02.789 16:37:07 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:13:02.789 16:37:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:13:02.789 16:37:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:13:02.789 16:37:07 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:02.789 16:37:07 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:02.789 16:37:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.789 ************************************ 00:13:02.789 START TEST rpc_daemon_integrity 00:13:02.789 ************************************ 00:13:02.789 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:13:02.789 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:02.789 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.789 16:37:07 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:02.789 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.789 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:13:02.789 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:13:02.789 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:13:02.789 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:13:02.789 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.789 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:02.789 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.789 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:13:02.789 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:13:02.789 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.789 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:02.789 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.789 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:13:02.789 { 00:13:02.789 "name": "Malloc2", 00:13:02.789 "aliases": [ 00:13:02.789 "c9dd714b-def9-4059-b73b-40ab025a0744" 00:13:02.789 ], 00:13:02.789 "product_name": "Malloc disk", 00:13:02.789 "block_size": 512, 00:13:02.789 "num_blocks": 16384, 00:13:02.789 "uuid": "c9dd714b-def9-4059-b73b-40ab025a0744", 00:13:02.789 "assigned_rate_limits": { 00:13:02.789 "rw_ios_per_sec": 0, 00:13:02.789 "rw_mbytes_per_sec": 0, 00:13:02.789 "r_mbytes_per_sec": 0, 00:13:02.789 "w_mbytes_per_sec": 0 00:13:02.789 }, 00:13:02.789 "claimed": false, 00:13:02.789 "zoned": false, 00:13:02.789 "supported_io_types": { 00:13:02.789 "read": true, 00:13:02.789 "write": true, 00:13:02.789 "unmap": true, 00:13:02.789 "flush": true, 00:13:02.789 "reset": true, 00:13:02.789 "nvme_admin": false, 00:13:02.789 "nvme_io": false, 00:13:02.789 "nvme_io_md": false, 00:13:02.789 "write_zeroes": true, 00:13:02.789 "zcopy": true, 00:13:02.789 "get_zone_info": false, 00:13:02.789 "zone_management": false, 00:13:02.789 "zone_append": false, 00:13:02.789 "compare": false, 00:13:02.789 "compare_and_write": false, 00:13:02.789 "abort": true, 00:13:02.789 "seek_hole": false, 00:13:02.790 "seek_data": false, 00:13:02.790 "copy": true, 00:13:02.790 "nvme_iov_md": false 00:13:02.790 }, 00:13:02.790 "memory_domains": [ 00:13:02.790 { 00:13:02.790 "dma_device_id": "system", 00:13:02.790 "dma_device_type": 1 00:13:02.790 }, 00:13:02.790 { 00:13:02.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.790 "dma_device_type": 2 00:13:02.790 } 00:13:02.790 ], 00:13:02.790 "driver_specific": {} 00:13:02.790 } 00:13:02.790 ]' 00:13:02.790 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:13:02.790 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:13:02.790 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:13:02.790 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.790 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:02.790 [2024-11-05 16:37:07.374313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:13:02.790 
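The Malloc2/Passthru0 registration beginning above is the same claim cycle rpc_integrity just completed: create a malloc bdev, stack a passthru on it (which claims the base bdev), confirm both are visible, then tear down in reverse and verify the bdev list is empty again. A condensed, illustrative version of that cycle using raw rpc.py calls (rpc_cmd in the trace is a thin wrapper over these):

    # Illustrative sketch, not from the trace; assumes the SPDK checkout root as cwd.
    malloc=$(./scripts/rpc.py bdev_malloc_create 8 512)   # 8 MiB / 512 B blocks -> 16384 blocks; prints the bdev name
    ./scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length           # 2: the claimed malloc plus the passthru
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete "$malloc"
    ./scripts/rpc.py bdev_get_bdevs | jq length           # back to 0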
[2024-11-05 16:37:07.374353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.790 [2024-11-05 16:37:07.374374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x615c1d0 00:13:02.790 [2024-11-05 16:37:07.374387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.049 [2024-11-05 16:37:07.375615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.049 [2024-11-05 16:37:07.375644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:13:03.049 Passthru0 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:13:03.049 { 00:13:03.049 "name": "Malloc2", 00:13:03.049 "aliases": [ 00:13:03.049 "c9dd714b-def9-4059-b73b-40ab025a0744" 00:13:03.049 ], 00:13:03.049 "product_name": "Malloc disk", 00:13:03.049 "block_size": 512, 00:13:03.049 "num_blocks": 16384, 00:13:03.049 "uuid": "c9dd714b-def9-4059-b73b-40ab025a0744", 00:13:03.049 "assigned_rate_limits": { 00:13:03.049 "rw_ios_per_sec": 0, 00:13:03.049 "rw_mbytes_per_sec": 0, 00:13:03.049 "r_mbytes_per_sec": 0, 00:13:03.049 "w_mbytes_per_sec": 0 00:13:03.049 }, 00:13:03.049 "claimed": true, 00:13:03.049 "claim_type": "exclusive_write", 00:13:03.049 "zoned": false, 00:13:03.049 "supported_io_types": { 00:13:03.049 "read": true, 00:13:03.049 "write": true, 00:13:03.049 "unmap": true, 00:13:03.049 "flush": true, 00:13:03.049 "reset": true, 00:13:03.049 "nvme_admin": false, 00:13:03.049 "nvme_io": false, 00:13:03.049 "nvme_io_md": false, 00:13:03.049 "write_zeroes": true, 00:13:03.049 "zcopy": true, 00:13:03.049 "get_zone_info": false, 00:13:03.049 "zone_management": false, 00:13:03.049 "zone_append": false, 00:13:03.049 "compare": false, 00:13:03.049 "compare_and_write": false, 00:13:03.049 "abort": true, 00:13:03.049 "seek_hole": false, 00:13:03.049 "seek_data": false, 00:13:03.049 "copy": true, 00:13:03.049 "nvme_iov_md": false 00:13:03.049 }, 00:13:03.049 "memory_domains": [ 00:13:03.049 { 00:13:03.049 "dma_device_id": "system", 00:13:03.049 "dma_device_type": 1 00:13:03.049 }, 00:13:03.049 { 00:13:03.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.049 "dma_device_type": 2 00:13:03.049 } 00:13:03.049 ], 00:13:03.049 "driver_specific": {} 00:13:03.049 }, 00:13:03.049 { 00:13:03.049 "name": "Passthru0", 00:13:03.049 "aliases": [ 00:13:03.049 "775e6697-30d4-5c30-8233-88e173d59940" 00:13:03.049 ], 00:13:03.049 "product_name": "passthru", 00:13:03.049 "block_size": 512, 00:13:03.049 "num_blocks": 16384, 00:13:03.049 "uuid": "775e6697-30d4-5c30-8233-88e173d59940", 00:13:03.049 "assigned_rate_limits": { 00:13:03.049 "rw_ios_per_sec": 0, 00:13:03.049 "rw_mbytes_per_sec": 0, 00:13:03.049 "r_mbytes_per_sec": 0, 00:13:03.049 "w_mbytes_per_sec": 0 00:13:03.049 }, 00:13:03.049 "claimed": false, 00:13:03.049 "zoned": false, 00:13:03.049 "supported_io_types": { 00:13:03.049 "read": true, 00:13:03.049 "write": true, 00:13:03.049 "unmap": true, 00:13:03.049 "flush": true, 00:13:03.049 "reset": true, 
00:13:03.049 "nvme_admin": false, 00:13:03.049 "nvme_io": false, 00:13:03.049 "nvme_io_md": false, 00:13:03.049 "write_zeroes": true, 00:13:03.049 "zcopy": true, 00:13:03.049 "get_zone_info": false, 00:13:03.049 "zone_management": false, 00:13:03.049 "zone_append": false, 00:13:03.049 "compare": false, 00:13:03.049 "compare_and_write": false, 00:13:03.049 "abort": true, 00:13:03.049 "seek_hole": false, 00:13:03.049 "seek_data": false, 00:13:03.049 "copy": true, 00:13:03.049 "nvme_iov_md": false 00:13:03.049 }, 00:13:03.049 "memory_domains": [ 00:13:03.049 { 00:13:03.049 "dma_device_id": "system", 00:13:03.049 "dma_device_type": 1 00:13:03.049 }, 00:13:03.049 { 00:13:03.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.049 "dma_device_type": 2 00:13:03.049 } 00:13:03.049 ], 00:13:03.049 "driver_specific": { 00:13:03.049 "passthru": { 00:13:03.049 "name": "Passthru0", 00:13:03.049 "base_bdev_name": "Malloc2" 00:13:03.049 } 00:13:03.049 } 00:13:03.049 } 00:13:03.049 ]' 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:13:03.049 00:13:03.049 real 0m0.270s 00:13:03.049 user 0m0.170s 00:13:03.049 sys 0m0.033s 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:03.049 16:37:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:13:03.049 ************************************ 00:13:03.049 END TEST rpc_daemon_integrity 00:13:03.049 ************************************ 00:13:03.049 16:37:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:13:03.049 16:37:07 rpc -- rpc/rpc.sh@84 -- # killprocess 3509345 00:13:03.049 16:37:07 rpc -- common/autotest_common.sh@952 -- # '[' -z 3509345 ']' 00:13:03.049 16:37:07 rpc -- common/autotest_common.sh@956 -- # kill -0 3509345 00:13:03.049 16:37:07 rpc -- common/autotest_common.sh@957 -- # uname 00:13:03.049 16:37:07 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:03.049 16:37:07 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3509345 
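The trace here is the harness tearing the target down: killprocess confirms the pid is still alive, resolves its command name (reactor_0 for an SPDK app, as the next lines show), then kills and reaps it. A simplified, illustrative reading of that helper; the real one also special-cases sudo-owned processes:

    # Illustrative sketch, not from the trace.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                        # still running?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                   # reap; a nonzero exit is expected here
    }
    killprocess 3509345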
00:13:03.049 16:37:07 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:03.049 16:37:07 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:03.049 16:37:07 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3509345' 00:13:03.050 killing process with pid 3509345 00:13:03.050 16:37:07 rpc -- common/autotest_common.sh@971 -- # kill 3509345 00:13:03.050 16:37:07 rpc -- common/autotest_common.sh@976 -- # wait 3509345 00:13:03.618 00:13:03.618 real 0m2.236s 00:13:03.618 user 0m2.732s 00:13:03.618 sys 0m0.834s 00:13:03.618 16:37:07 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:03.618 16:37:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.618 ************************************ 00:13:03.618 END TEST rpc 00:13:03.618 ************************************ 00:13:03.618 16:37:08 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:13:03.618 16:37:08 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:03.618 16:37:08 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:03.618 16:37:08 -- common/autotest_common.sh@10 -- # set +x 00:13:03.618 ************************************ 00:13:03.618 START TEST skip_rpc 00:13:03.618 ************************************ 00:13:03.618 16:37:08 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:13:03.618 * Looking for test storage... 00:13:03.618 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:13:03.618 16:37:08 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:03.618 16:37:08 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:13:03.618 16:37:08 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:03.877 16:37:08 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@345 -- # : 1 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:13:03.877 16:37:08 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:03.878 16:37:08 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:03.878 16:37:08 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:03.878 16:37:08 skip_rpc -- scripts/common.sh@368 -- # return 0 00:13:03.878 16:37:08 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.878 16:37:08 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:03.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.878 --rc genhtml_branch_coverage=1 00:13:03.878 --rc genhtml_function_coverage=1 00:13:03.878 --rc genhtml_legend=1 00:13:03.878 --rc geninfo_all_blocks=1 00:13:03.878 --rc geninfo_unexecuted_blocks=1 00:13:03.878 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:03.878 ' 00:13:03.878 16:37:08 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:03.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.878 --rc genhtml_branch_coverage=1 00:13:03.878 --rc genhtml_function_coverage=1 00:13:03.878 --rc genhtml_legend=1 00:13:03.878 --rc geninfo_all_blocks=1 00:13:03.878 --rc geninfo_unexecuted_blocks=1 00:13:03.878 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:03.878 ' 00:13:03.878 16:37:08 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:03.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.878 --rc genhtml_branch_coverage=1 00:13:03.878 --rc genhtml_function_coverage=1 00:13:03.878 --rc genhtml_legend=1 00:13:03.878 --rc geninfo_all_blocks=1 00:13:03.878 --rc geninfo_unexecuted_blocks=1 00:13:03.878 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:03.878 ' 00:13:03.878 16:37:08 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:03.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.878 --rc genhtml_branch_coverage=1 00:13:03.878 --rc genhtml_function_coverage=1 00:13:03.878 --rc genhtml_legend=1 00:13:03.878 --rc geninfo_all_blocks=1 00:13:03.878 --rc geninfo_unexecuted_blocks=1 00:13:03.878 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:03.878 ' 00:13:03.878 16:37:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:13:03.878 16:37:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:13:03.878 16:37:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:13:03.878 16:37:08 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:03.878 16:37:08 
skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:03.878 16:37:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.878 ************************************ 00:13:03.878 START TEST skip_rpc 00:13:03.878 ************************************ 00:13:03.878 16:37:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:13:03.878 16:37:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3509795 00:13:03.878 16:37:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:03.878 16:37:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:13:03.878 16:37:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:13:03.878 [2024-11-05 16:37:08.288918] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:13:03.878 [2024-11-05 16:37:08.288995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3509795 ] 00:13:03.878 [2024-11-05 16:37:08.413089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.137 [2024-11-05 16:37:08.469361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3509795 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 3509795 ']' 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 3509795 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3509795 
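The skip_rpc case traced above is an inverted assertion: with spdk_tgt started under --no-rpc-server there is no socket to wait on, so the harness simply sleeps, then requires that any RPC fail. An illustrative sketch of that pattern, with NOT() as a simplified stand-in for the harness helper of the same name:

    # Illustrative sketch, not from the trace; assumes the SPDK checkout root as cwd.
    NOT() { ! "$@"; }                           # succeed only if the wrapped command fails

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &   # target with the RPC server disabled
    spdk_pid=$!
    sleep 5                                         # no RPC socket to poll, so just wait
    NOT ./scripts/rpc.py spdk_get_version           # any RPC must fail now
    kill "$spdk_pid"; wait "$spdk_pid" 2>/dev/null || true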
00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3509795' 00:13:09.438 killing process with pid 3509795 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 3509795 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 3509795 00:13:09.438 00:13:09.438 real 0m5.437s 00:13:09.438 user 0m5.095s 00:13:09.438 sys 0m0.384s 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:09.438 16:37:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.438 ************************************ 00:13:09.438 END TEST skip_rpc 00:13:09.438 ************************************ 00:13:09.438 16:37:13 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:13:09.438 16:37:13 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:09.438 16:37:13 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:09.438 16:37:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.438 ************************************ 00:13:09.438 START TEST skip_rpc_with_json 00:13:09.438 ************************************ 00:13:09.438 16:37:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:13:09.438 16:37:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:13:09.438 16:37:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3510576 00:13:09.438 16:37:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:09.438 16:37:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:13:09.438 16:37:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3510576 00:13:09.438 16:37:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 3510576 ']' 00:13:09.438 16:37:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.438 16:37:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:09.438 16:37:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.438 16:37:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:09.438 16:37:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:09.438 [2024-11-05 16:37:13.805614] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
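skip_rpc_with_json, starting here, first proves the error path before building state: nvmf_get_transports must fail while no transport exists, after which a TCP transport is created and the whole configuration is saved, as the trace below shows. An illustrative sketch of that opening probe, reusing NOT from the earlier sketch:

    # Illustrative sketch, not from the trace.
    NOT ./scripts/rpc.py nvmf_get_transports --trtype tcp   # no transport yet: -19 "No such device"
    ./scripts/rpc.py nvmf_create_transport -t tcp           # logs "*** TCP Transport Init ***"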
00:13:09.439 [2024-11-05 16:37:13.805690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3510576 ] 00:13:09.439 [2024-11-05 16:37:13.929728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.439 [2024-11-05 16:37:13.986969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.697 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:09.697 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:13:09.697 16:37:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:13:09.697 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.697 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:09.697 [2024-11-05 16:37:14.245670] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:13:09.697 request: 00:13:09.697 { 00:13:09.697 "trtype": "tcp", 00:13:09.697 "method": "nvmf_get_transports", 00:13:09.697 "req_id": 1 00:13:09.697 } 00:13:09.697 Got JSON-RPC error response 00:13:09.697 response: 00:13:09.697 { 00:13:09.697 "code": -19, 00:13:09.697 "message": "No such device" 00:13:09.697 } 00:13:09.697 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:09.697 16:37:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:13:09.697 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.697 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:09.697 [2024-11-05 16:37:14.257799] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.697 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.697 16:37:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:13:09.697 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.697 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:09.957 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.957 16:37:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:13:09.957 { 00:13:09.957 "subsystems": [ 00:13:09.957 { 00:13:09.957 "subsystem": "scheduler", 00:13:09.957 "config": [ 00:13:09.957 { 00:13:09.957 "method": "framework_set_scheduler", 00:13:09.957 "params": { 00:13:09.957 "name": "static" 00:13:09.957 } 00:13:09.957 } 00:13:09.957 ] 00:13:09.957 }, 00:13:09.957 { 00:13:09.957 "subsystem": "vmd", 00:13:09.957 "config": [] 00:13:09.957 }, 00:13:09.957 { 00:13:09.957 "subsystem": "sock", 00:13:09.957 "config": [ 00:13:09.957 { 00:13:09.957 "method": "sock_set_default_impl", 00:13:09.957 "params": { 00:13:09.957 "impl_name": "posix" 00:13:09.957 } 00:13:09.957 }, 00:13:09.957 { 00:13:09.957 "method": "sock_impl_set_options", 00:13:09.957 "params": { 00:13:09.957 "impl_name": "ssl", 00:13:09.957 "recv_buf_size": 4096, 00:13:09.957 "send_buf_size": 4096, 00:13:09.957 "enable_recv_pipe": true, 00:13:09.957 "enable_quickack": false, 00:13:09.957 
"enable_placement_id": 0, 00:13:09.957 "enable_zerocopy_send_server": true, 00:13:09.957 "enable_zerocopy_send_client": false, 00:13:09.957 "zerocopy_threshold": 0, 00:13:09.957 "tls_version": 0, 00:13:09.957 "enable_ktls": false 00:13:09.957 } 00:13:09.957 }, 00:13:09.957 { 00:13:09.957 "method": "sock_impl_set_options", 00:13:09.957 "params": { 00:13:09.957 "impl_name": "posix", 00:13:09.957 "recv_buf_size": 2097152, 00:13:09.957 "send_buf_size": 2097152, 00:13:09.957 "enable_recv_pipe": true, 00:13:09.958 "enable_quickack": false, 00:13:09.958 "enable_placement_id": 0, 00:13:09.958 "enable_zerocopy_send_server": true, 00:13:09.958 "enable_zerocopy_send_client": false, 00:13:09.958 "zerocopy_threshold": 0, 00:13:09.958 "tls_version": 0, 00:13:09.958 "enable_ktls": false 00:13:09.958 } 00:13:09.958 } 00:13:09.958 ] 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "subsystem": "iobuf", 00:13:09.958 "config": [ 00:13:09.958 { 00:13:09.958 "method": "iobuf_set_options", 00:13:09.958 "params": { 00:13:09.958 "small_pool_count": 8192, 00:13:09.958 "large_pool_count": 1024, 00:13:09.958 "small_bufsize": 8192, 00:13:09.958 "large_bufsize": 135168, 00:13:09.958 "enable_numa": false 00:13:09.958 } 00:13:09.958 } 00:13:09.958 ] 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "subsystem": "keyring", 00:13:09.958 "config": [] 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "subsystem": "vfio_user_target", 00:13:09.958 "config": null 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "subsystem": "fsdev", 00:13:09.958 "config": [ 00:13:09.958 { 00:13:09.958 "method": "fsdev_set_opts", 00:13:09.958 "params": { 00:13:09.958 "fsdev_io_pool_size": 65535, 00:13:09.958 "fsdev_io_cache_size": 256 00:13:09.958 } 00:13:09.958 } 00:13:09.958 ] 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "subsystem": "accel", 00:13:09.958 "config": [ 00:13:09.958 { 00:13:09.958 "method": "accel_set_options", 00:13:09.958 "params": { 00:13:09.958 "small_cache_size": 128, 00:13:09.958 "large_cache_size": 16, 00:13:09.958 "task_count": 2048, 00:13:09.958 "sequence_count": 2048, 00:13:09.958 "buf_count": 2048 00:13:09.958 } 00:13:09.958 } 00:13:09.958 ] 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "subsystem": "bdev", 00:13:09.958 "config": [ 00:13:09.958 { 00:13:09.958 "method": "bdev_set_options", 00:13:09.958 "params": { 00:13:09.958 "bdev_io_pool_size": 65535, 00:13:09.958 "bdev_io_cache_size": 256, 00:13:09.958 "bdev_auto_examine": true, 00:13:09.958 "iobuf_small_cache_size": 128, 00:13:09.958 "iobuf_large_cache_size": 16 00:13:09.958 } 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "method": "bdev_raid_set_options", 00:13:09.958 "params": { 00:13:09.958 "process_window_size_kb": 1024, 00:13:09.958 "process_max_bandwidth_mb_sec": 0 00:13:09.958 } 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "method": "bdev_nvme_set_options", 00:13:09.958 "params": { 00:13:09.958 "action_on_timeout": "none", 00:13:09.958 "timeout_us": 0, 00:13:09.958 "timeout_admin_us": 0, 00:13:09.958 "keep_alive_timeout_ms": 10000, 00:13:09.958 "arbitration_burst": 0, 00:13:09.958 "low_priority_weight": 0, 00:13:09.958 "medium_priority_weight": 0, 00:13:09.958 "high_priority_weight": 0, 00:13:09.958 "nvme_adminq_poll_period_us": 10000, 00:13:09.958 "nvme_ioq_poll_period_us": 0, 00:13:09.958 "io_queue_requests": 0, 00:13:09.958 "delay_cmd_submit": true, 00:13:09.958 "transport_retry_count": 4, 00:13:09.958 "bdev_retry_count": 3, 00:13:09.958 "transport_ack_timeout": 0, 00:13:09.958 "ctrlr_loss_timeout_sec": 0, 00:13:09.958 "reconnect_delay_sec": 0, 00:13:09.958 
"fast_io_fail_timeout_sec": 0, 00:13:09.958 "disable_auto_failback": false, 00:13:09.958 "generate_uuids": false, 00:13:09.958 "transport_tos": 0, 00:13:09.958 "nvme_error_stat": false, 00:13:09.958 "rdma_srq_size": 0, 00:13:09.958 "io_path_stat": false, 00:13:09.958 "allow_accel_sequence": false, 00:13:09.958 "rdma_max_cq_size": 0, 00:13:09.958 "rdma_cm_event_timeout_ms": 0, 00:13:09.958 "dhchap_digests": [ 00:13:09.958 "sha256", 00:13:09.958 "sha384", 00:13:09.958 "sha512" 00:13:09.958 ], 00:13:09.958 "dhchap_dhgroups": [ 00:13:09.958 "null", 00:13:09.958 "ffdhe2048", 00:13:09.958 "ffdhe3072", 00:13:09.958 "ffdhe4096", 00:13:09.958 "ffdhe6144", 00:13:09.958 "ffdhe8192" 00:13:09.958 ] 00:13:09.958 } 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "method": "bdev_nvme_set_hotplug", 00:13:09.958 "params": { 00:13:09.958 "period_us": 100000, 00:13:09.958 "enable": false 00:13:09.958 } 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "method": "bdev_iscsi_set_options", 00:13:09.958 "params": { 00:13:09.958 "timeout_sec": 30 00:13:09.958 } 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "method": "bdev_wait_for_examine" 00:13:09.958 } 00:13:09.958 ] 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "subsystem": "nvmf", 00:13:09.958 "config": [ 00:13:09.958 { 00:13:09.958 "method": "nvmf_set_config", 00:13:09.958 "params": { 00:13:09.958 "discovery_filter": "match_any", 00:13:09.958 "admin_cmd_passthru": { 00:13:09.958 "identify_ctrlr": false 00:13:09.958 }, 00:13:09.958 "dhchap_digests": [ 00:13:09.958 "sha256", 00:13:09.958 "sha384", 00:13:09.958 "sha512" 00:13:09.958 ], 00:13:09.958 "dhchap_dhgroups": [ 00:13:09.958 "null", 00:13:09.958 "ffdhe2048", 00:13:09.958 "ffdhe3072", 00:13:09.958 "ffdhe4096", 00:13:09.958 "ffdhe6144", 00:13:09.958 "ffdhe8192" 00:13:09.958 ] 00:13:09.958 } 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "method": "nvmf_set_max_subsystems", 00:13:09.958 "params": { 00:13:09.958 "max_subsystems": 1024 00:13:09.958 } 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "method": "nvmf_set_crdt", 00:13:09.958 "params": { 00:13:09.958 "crdt1": 0, 00:13:09.958 "crdt2": 0, 00:13:09.958 "crdt3": 0 00:13:09.958 } 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "method": "nvmf_create_transport", 00:13:09.958 "params": { 00:13:09.958 "trtype": "TCP", 00:13:09.958 "max_queue_depth": 128, 00:13:09.958 "max_io_qpairs_per_ctrlr": 127, 00:13:09.958 "in_capsule_data_size": 4096, 00:13:09.958 "max_io_size": 131072, 00:13:09.958 "io_unit_size": 131072, 00:13:09.958 "max_aq_depth": 128, 00:13:09.958 "num_shared_buffers": 511, 00:13:09.958 "buf_cache_size": 4294967295, 00:13:09.958 "dif_insert_or_strip": false, 00:13:09.958 "zcopy": false, 00:13:09.958 "c2h_success": true, 00:13:09.958 "sock_priority": 0, 00:13:09.958 "abort_timeout_sec": 1, 00:13:09.958 "ack_timeout": 0, 00:13:09.958 "data_wr_pool_size": 0 00:13:09.958 } 00:13:09.958 } 00:13:09.958 ] 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "subsystem": "nbd", 00:13:09.958 "config": [] 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "subsystem": "ublk", 00:13:09.958 "config": [] 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "subsystem": "vhost_blk", 00:13:09.958 "config": [] 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "subsystem": "scsi", 00:13:09.958 "config": null 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "subsystem": "iscsi", 00:13:09.958 "config": [ 00:13:09.958 { 00:13:09.958 "method": "iscsi_set_options", 00:13:09.958 "params": { 00:13:09.958 "node_base": "iqn.2016-06.io.spdk", 00:13:09.958 "max_sessions": 128, 00:13:09.958 "max_connections_per_session": 2, 
00:13:09.958 "max_queue_depth": 64, 00:13:09.958 "default_time2wait": 2, 00:13:09.958 "default_time2retain": 20, 00:13:09.958 "first_burst_length": 8192, 00:13:09.958 "immediate_data": true, 00:13:09.958 "allow_duplicated_isid": false, 00:13:09.958 "error_recovery_level": 0, 00:13:09.958 "nop_timeout": 60, 00:13:09.958 "nop_in_interval": 30, 00:13:09.958 "disable_chap": false, 00:13:09.958 "require_chap": false, 00:13:09.958 "mutual_chap": false, 00:13:09.958 "chap_group": 0, 00:13:09.958 "max_large_datain_per_connection": 64, 00:13:09.958 "max_r2t_per_connection": 4, 00:13:09.958 "pdu_pool_size": 36864, 00:13:09.958 "immediate_data_pool_size": 16384, 00:13:09.958 "data_out_pool_size": 2048 00:13:09.958 } 00:13:09.958 } 00:13:09.958 ] 00:13:09.958 }, 00:13:09.958 { 00:13:09.958 "subsystem": "vhost_scsi", 00:13:09.959 "config": [] 00:13:09.959 } 00:13:09.959 ] 00:13:09.959 } 00:13:09.959 16:37:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:09.959 16:37:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3510576 00:13:09.959 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3510576 ']' 00:13:09.959 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3510576 00:13:09.959 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:13:09.959 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:09.959 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3510576 00:13:09.959 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:09.959 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:09.959 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3510576' 00:13:09.959 killing process with pid 3510576 00:13:09.959 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3510576 00:13:09.959 16:37:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3510576 00:13:10.527 16:37:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3510629 00:13:10.527 16:37:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:13:10.527 16:37:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:13:15.799 16:37:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3510629 00:13:15.799 16:37:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 3510629 ']' 00:13:15.799 16:37:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 3510629 00:13:15.799 16:37:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:13:15.799 16:37:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:15.799 16:37:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3510629 00:13:15.799 16:37:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:15.799 16:37:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:15.799 16:37:19 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3510629' 00:13:15.799 killing process with pid 3510629 00:13:15.799 16:37:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 3510629 00:13:15.799 16:37:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 3510629 00:13:15.799 16:37:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:13:15.799 16:37:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:13:15.799 00:13:15.799 real 0m6.523s 00:13:15.799 user 0m6.145s 00:13:15.799 sys 0m0.794s 00:13:15.799 16:37:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:15.799 16:37:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:15.799 ************************************ 00:13:15.799 END TEST skip_rpc_with_json 00:13:15.799 ************************************ 00:13:15.799 16:37:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:13:15.799 16:37:20 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:15.799 16:37:20 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:15.799 16:37:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.059 ************************************ 00:13:16.059 START TEST skip_rpc_with_delay 00:13:16.059 ************************************ 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:16.059 [2024-11-05 16:37:20.419134] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:16.059 00:13:16.059 real 0m0.050s 00:13:16.059 user 0m0.021s 00:13:16.059 sys 0m0.029s 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:16.059 16:37:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:13:16.059 ************************************ 00:13:16.059 END TEST skip_rpc_with_delay 00:13:16.059 ************************************ 00:13:16.059 16:37:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:13:16.059 16:37:20 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:13:16.059 16:37:20 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:13:16.059 16:37:20 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:16.059 16:37:20 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:16.059 16:37:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.059 ************************************ 00:13:16.059 START TEST exit_on_failed_rpc_init 00:13:16.059 ************************************ 00:13:16.059 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:13:16.059 16:37:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3511509 00:13:16.059 16:37:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3511509 00:13:16.059 16:37:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:13:16.059 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 3511509 ']' 00:13:16.059 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.059 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:16.059 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.059 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:16.059 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:13:16.059 [2024-11-05 16:37:20.548293] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
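The "Cannot use '--wait-for-rpc'" error traced above is the expected result, not a failure: skip_rpc_with_delay wraps the launch in the suite's NOT helper, so the test passes only when spdk_tgt refuses that flag combination. A minimal stand-alone version of the same assertion (a sketch, not the suite's exact wrapper code) would be:

    SPDK_BIN=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
    # With --no-rpc-server there is no RPC server to wait for, so --wait-for-rpc
    # must be rejected and the target must exit non-zero.
    if "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "FAIL: target started despite contradictory flags" >&2
        exit 1
    fi
    echo "OK: --wait-for-rpc rejected when no RPC server is started"
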
00:13:16.059 [2024-11-05 16:37:20.548374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3511509 ] 00:13:16.319 [2024-11-05 16:37:20.671331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.319 [2024-11-05 16:37:20.726570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.578 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:16.578 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:13:16.578 16:37:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:16.578 16:37:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:13:16.578 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:13:16.578 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:13:16.578 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:13:16.578 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.578 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:13:16.578 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.578 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:13:16.578 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.578 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:13:16.578 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:13:16.578 16:37:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:13:16.578 [2024-11-05 16:37:21.004019] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:13:16.578 [2024-11-05 16:37:21.004087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3511550 ] 00:13:16.578 [2024-11-05 16:37:21.098618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.578 [2024-11-05 16:37:21.144620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.578 [2024-11-05 16:37:21.144704] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
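The "RPC Unix domain socket path /var/tmp/spdk.sock in use" error above, together with the non-zero spdk_app_stop that follows, is exactly the behavior exit_on_failed_rpc_init checks: a second spdk_tgt cannot bring up its RPC service while the first instance still owns the default socket. A stand-alone sketch of the scenario (the sleep is a crude stand-in for the suite's waitforlisten helper):

    SPDK_BIN=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
    "$SPDK_BIN" -m 0x1 &            # first instance takes /var/tmp/spdk.sock
    first=$!
    sleep 2                         # crude stand-in for waitforlisten
    if "$SPDK_BIN" -m 0x2; then     # second instance must fail: socket in use
        echo "FAIL: second instance started its RPC service" >&2
    fi
    kill "$first"
    wait "$first" 2>/dev/null
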
00:13:16.578 [2024-11-05 16:37:21.144722] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:13:16.578 [2024-11-05 16:37:21.144730] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:16.837 16:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:13:16.837 16:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:16.837 16:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:13:16.837 16:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:13:16.837 16:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:13:16.837 16:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:16.837 16:37:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:16.837 16:37:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3511509 00:13:16.837 16:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 3511509 ']' 00:13:16.837 16:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 3511509 00:13:16.837 16:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:13:16.837 16:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:16.837 16:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3511509 00:13:16.837 16:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:16.837 16:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:16.837 16:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3511509' 00:13:16.837 killing process with pid 3511509 00:13:16.838 16:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 3511509 00:13:16.838 16:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 3511509 00:13:17.097 00:13:17.097 real 0m1.095s 00:13:17.097 user 0m1.132s 00:13:17.097 sys 0m0.501s 00:13:17.097 16:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:17.097 16:37:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:13:17.097 ************************************ 00:13:17.097 END TEST exit_on_failed_rpc_init 00:13:17.097 ************************************ 00:13:17.097 16:37:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:13:17.097 00:13:17.097 real 0m13.618s 00:13:17.097 user 0m12.633s 00:13:17.097 sys 0m2.022s 00:13:17.097 16:37:21 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:17.097 16:37:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:17.097 ************************************ 00:13:17.097 END TEST skip_rpc 00:13:17.097 ************************************ 00:13:17.356 16:37:21 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:13:17.356 16:37:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:17.356 16:37:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:17.356 16:37:21 
-- common/autotest_common.sh@10 -- # set +x 00:13:17.356 ************************************ 00:13:17.356 START TEST rpc_client 00:13:17.356 ************************************ 00:13:17.356 16:37:21 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:13:17.356 * Looking for test storage... 00:13:17.356 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:13:17.356 16:37:21 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:17.356 16:37:21 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:13:17.356 16:37:21 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:17.356 16:37:21 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@345 -- # : 1 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@353 -- # local d=1 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@355 -- # echo 1 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@353 -- # local d=2 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@355 -- # echo 2 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:17.356 16:37:21 rpc_client -- scripts/common.sh@368 -- # return 0 00:13:17.356 16:37:21 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.356 16:37:21 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:17.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.356 --rc genhtml_branch_coverage=1 00:13:17.356 --rc genhtml_function_coverage=1 00:13:17.356 --rc genhtml_legend=1 00:13:17.356 --rc geninfo_all_blocks=1 00:13:17.356 --rc geninfo_unexecuted_blocks=1 00:13:17.356 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:17.356 ' 00:13:17.356 16:37:21 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:17.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.356 --rc genhtml_branch_coverage=1 00:13:17.356 --rc genhtml_function_coverage=1 00:13:17.356 --rc genhtml_legend=1 00:13:17.357 --rc geninfo_all_blocks=1 00:13:17.357 --rc geninfo_unexecuted_blocks=1 00:13:17.357 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:17.357 ' 00:13:17.357 16:37:21 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:17.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.357 --rc genhtml_branch_coverage=1 00:13:17.357 --rc genhtml_function_coverage=1 00:13:17.357 --rc genhtml_legend=1 00:13:17.357 --rc geninfo_all_blocks=1 00:13:17.357 --rc geninfo_unexecuted_blocks=1 00:13:17.357 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:17.357 ' 00:13:17.357 16:37:21 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:17.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.357 --rc genhtml_branch_coverage=1 00:13:17.357 --rc genhtml_function_coverage=1 00:13:17.357 --rc genhtml_legend=1 00:13:17.357 --rc geninfo_all_blocks=1 00:13:17.357 --rc geninfo_unexecuted_blocks=1 00:13:17.357 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:17.357 ' 00:13:17.357 16:37:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:13:17.357 OK 00:13:17.357 16:37:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:13:17.357 00:13:17.357 real 0m0.199s 00:13:17.357 user 0m0.103s 00:13:17.357 sys 0m0.107s 00:13:17.357 16:37:21 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 
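The rpc_client_test binary run above links against SPDK's JSON-RPC client library and performs request/response round trips against a live target; the "OK" line is its pass marker. The same wire path can be exercised from the shell with scripts/rpc.py (an illustrative sketch, not part of this suite):

    SPDK_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$SPDK_ROOT/build/bin/spdk_tgt" -m 0x1 &
    sleep 2                                   # crude stand-in for waitforlisten
    # One JSON-RPC round trip over the default /var/tmp/spdk.sock socket.
    "$SPDK_ROOT/scripts/rpc.py" rpc_get_methods >/dev/null && echo OK
    kill %1
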
00:13:17.357 16:37:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:13:17.357 ************************************ 00:13:17.357 END TEST rpc_client 00:13:17.357 ************************************ 00:13:17.617 16:37:21 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:13:17.617 16:37:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:17.617 16:37:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:17.617 16:37:21 -- common/autotest_common.sh@10 -- # set +x 00:13:17.617 ************************************ 00:13:17.617 START TEST json_config 00:13:17.617 ************************************ 00:13:17.617 16:37:21 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:13:17.617 16:37:22 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:17.617 16:37:22 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:13:17.617 16:37:22 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:17.617 16:37:22 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:17.617 16:37:22 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:17.617 16:37:22 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:17.617 16:37:22 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:17.617 16:37:22 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.617 16:37:22 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:13:17.617 16:37:22 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:13:17.617 16:37:22 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:13:17.617 16:37:22 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:13:17.617 16:37:22 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:13:17.617 16:37:22 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:13:17.617 16:37:22 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:17.617 16:37:22 json_config -- scripts/common.sh@344 -- # case "$op" in 00:13:17.617 16:37:22 json_config -- scripts/common.sh@345 -- # : 1 00:13:17.617 16:37:22 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:17.617 16:37:22 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:17.617 16:37:22 json_config -- scripts/common.sh@365 -- # decimal 1 00:13:17.617 16:37:22 json_config -- scripts/common.sh@353 -- # local d=1 00:13:17.617 16:37:22 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.617 16:37:22 json_config -- scripts/common.sh@355 -- # echo 1 00:13:17.617 16:37:22 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:13:17.617 16:37:22 json_config -- scripts/common.sh@366 -- # decimal 2 00:13:17.617 16:37:22 json_config -- scripts/common.sh@353 -- # local d=2 00:13:17.617 16:37:22 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.617 16:37:22 json_config -- scripts/common.sh@355 -- # echo 2 00:13:17.617 16:37:22 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:13:17.617 16:37:22 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:17.617 16:37:22 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:17.617 16:37:22 json_config -- scripts/common.sh@368 -- # return 0 00:13:17.617 16:37:22 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.617 16:37:22 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:17.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.617 --rc genhtml_branch_coverage=1 00:13:17.617 --rc genhtml_function_coverage=1 00:13:17.617 --rc genhtml_legend=1 00:13:17.617 --rc geninfo_all_blocks=1 00:13:17.617 --rc geninfo_unexecuted_blocks=1 00:13:17.617 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:17.617 ' 00:13:17.617 16:37:22 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:17.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.617 --rc genhtml_branch_coverage=1 00:13:17.617 --rc genhtml_function_coverage=1 00:13:17.617 --rc genhtml_legend=1 00:13:17.617 --rc geninfo_all_blocks=1 00:13:17.617 --rc geninfo_unexecuted_blocks=1 00:13:17.617 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:17.617 ' 00:13:17.617 16:37:22 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:17.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.617 --rc genhtml_branch_coverage=1 00:13:17.617 --rc genhtml_function_coverage=1 00:13:17.617 --rc genhtml_legend=1 00:13:17.617 --rc geninfo_all_blocks=1 00:13:17.617 --rc geninfo_unexecuted_blocks=1 00:13:17.617 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:17.617 ' 00:13:17.617 16:37:22 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:17.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.617 --rc genhtml_branch_coverage=1 00:13:17.617 --rc genhtml_function_coverage=1 00:13:17.617 --rc genhtml_legend=1 00:13:17.617 --rc geninfo_all_blocks=1 00:13:17.617 --rc geninfo_unexecuted_blocks=1 00:13:17.617 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:17.617 ' 00:13:17.617 16:37:22 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:13:17.617 16:37:22 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:13:17.617 16:37:22 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.617 16:37:22 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.617 16:37:22 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.617 16:37:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.617 16:37:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.617 16:37:22 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.617 16:37:22 json_config -- paths/export.sh@5 -- # export PATH 00:13:17.617 16:37:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.617 16:37:22 json_config 
-- nvmf/common.sh@48 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/setup.sh 00:13:17.617 16:37:22 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:17.617 16:37:22 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:17.617 16:37:22 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@50 -- # : 0 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:17.617 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:17.617 16:37:22 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:17.617 16:37:22 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:13:17.617 16:37:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:13:17.617 16:37:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:13:17.617 16:37:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:13:17.617 16:37:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:13:17.617 16:37:22 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:13:17.617 WARNING: No tests are enabled so not running JSON configuration tests 00:13:17.617 16:37:22 json_config -- json_config/json_config.sh@28 -- # exit 0 00:13:17.617 00:13:17.618 real 0m0.201s 00:13:17.618 user 0m0.120s 00:13:17.618 sys 0m0.091s 00:13:17.618 16:37:22 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:17.618 16:37:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:17.618 ************************************ 00:13:17.618 END TEST json_config 00:13:17.618 ************************************ 00:13:17.878 16:37:22 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:13:17.878 16:37:22 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:17.878 16:37:22 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:17.878 16:37:22 -- common/autotest_common.sh@10 -- # set +x 00:13:17.878 ************************************ 00:13:17.878 START TEST json_config_extra_key 00:13:17.878 ************************************ 00:13:17.878 16:37:22 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:13:17.878 16:37:22 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:17.878 16:37:22 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:13:17.878 16:37:22 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:17.878 16:37:22 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:13:17.878 16:37:22 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.878 16:37:22 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:17.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.878 --rc genhtml_branch_coverage=1 00:13:17.878 --rc genhtml_function_coverage=1 00:13:17.878 --rc genhtml_legend=1 00:13:17.878 --rc geninfo_all_blocks=1 00:13:17.878 --rc geninfo_unexecuted_blocks=1 00:13:17.878 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:17.878 ' 00:13:17.878 16:37:22 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:17.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.878 --rc genhtml_branch_coverage=1 00:13:17.878 --rc genhtml_function_coverage=1 00:13:17.878 
--rc genhtml_legend=1 00:13:17.878 --rc geninfo_all_blocks=1 00:13:17.878 --rc geninfo_unexecuted_blocks=1 00:13:17.878 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:17.878 ' 00:13:17.878 16:37:22 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:17.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.878 --rc genhtml_branch_coverage=1 00:13:17.878 --rc genhtml_function_coverage=1 00:13:17.878 --rc genhtml_legend=1 00:13:17.878 --rc geninfo_all_blocks=1 00:13:17.878 --rc geninfo_unexecuted_blocks=1 00:13:17.878 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:17.878 ' 00:13:17.878 16:37:22 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:17.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.878 --rc genhtml_branch_coverage=1 00:13:17.878 --rc genhtml_function_coverage=1 00:13:17.878 --rc genhtml_legend=1 00:13:17.878 --rc geninfo_all_blocks=1 00:13:17.878 --rc geninfo_unexecuted_blocks=1 00:13:17.878 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:17.878 ' 00:13:17.878 16:37:22 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.878 16:37:22 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.878 16:37:22 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.878 16:37:22 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.878 16:37:22 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.878 16:37:22 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:13:17.878 16:37:22 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/setup.sh 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:17.878 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:17.878 16:37:22 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:17.879 16:37:22 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:17.879 16:37:22 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:17.879 16:37:22 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:13:17.879 
16:37:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:13:17.879 16:37:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:13:17.879 16:37:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:13:17.879 16:37:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:13:17.879 16:37:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:13:17.879 16:37:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:13:17.879 16:37:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:13:17.879 16:37:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:13:17.879 16:37:22 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:13:17.879 16:37:22 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:13:17.879 INFO: launching applications... 00:13:17.879 16:37:22 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:13:17.879 16:37:22 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:13:17.879 16:37:22 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:13:17.879 16:37:22 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:13:17.879 16:37:22 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:13:17.879 16:37:22 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:13:17.879 16:37:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:17.879 16:37:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:17.879 16:37:22 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3511896 00:13:17.879 16:37:22 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:13:17.879 Waiting for target to run... 00:13:17.879 16:37:22 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:13:17.879 16:37:22 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3511896 /var/tmp/spdk_tgt.sock 00:13:17.879 16:37:22 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 3511896 ']' 00:13:17.879 16:37:22 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:13:17.879 16:37:22 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:17.879 16:37:22 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:13:17.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
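Note that json_config_extra_key starts the target with "--json .../extra_key.json", so the configuration is applied at boot rather than over the RPC socket. The repository's extra_key.json is not reproduced in this log; an illustrative file of the same shape (the subsystems/config/method/params layout visible in the dump earlier in this log, here with a hypothetical Malloc bdev) might look like:

    # Hypothetical example config; extra_key.json's real contents are not shown in this log.
    cat > /tmp/extra_key_example.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 32768, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF
    # Flags mirror the traced launch: core mask 0x1, 1024 MB, a private RPC socket.
    spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/extra_key_example.json
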
00:13:17.879 16:37:22 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:17.879 16:37:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:13:17.879 [2024-11-05 16:37:22.448899] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:13:17.879 [2024-11-05 16:37:22.448954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3511896 ] 00:13:18.448 [2024-11-05 16:37:22.762818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.448 [2024-11-05 16:37:22.810767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.016 16:37:23 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:19.016 16:37:23 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:13:19.016 16:37:23 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:13:19.016 00:13:19.016 16:37:23 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:13:19.016 INFO: shutting down applications... 00:13:19.016 16:37:23 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:13:19.016 16:37:23 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:13:19.016 16:37:23 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:13:19.016 16:37:23 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3511896 ]] 00:13:19.016 16:37:23 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3511896 00:13:19.016 16:37:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:13:19.016 16:37:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:19.016 16:37:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3511896 00:13:19.016 16:37:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:13:19.585 16:37:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:13:19.585 16:37:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:19.585 16:37:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3511896 00:13:19.585 16:37:23 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:13:19.585 16:37:23 json_config_extra_key -- json_config/common.sh@43 -- # break 00:13:19.585 16:37:23 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:13:19.585 16:37:23 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:13:19.585 SPDK target shutdown done 00:13:19.585 16:37:23 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:13:19.585 Success 00:13:19.585 00:13:19.585 real 0m1.638s 00:13:19.585 user 0m1.504s 00:13:19.585 sys 0m0.451s 00:13:19.585 16:37:23 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:19.585 16:37:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:13:19.585 ************************************ 00:13:19.585 END TEST json_config_extra_key 00:13:19.585 ************************************ 00:13:19.585 16:37:23 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
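The teardown traced above is the json_config suite's graceful-stop pattern: send SIGINT, then poll the pid for up to 30 half-second intervals before printing "SPDK target shutdown done". The loop visible in the common.sh trace amounts to:

    pid=3511896                               # pid taken from the trace above
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # stop polling once the target exits
        sleep 0.5
    done
    echo 'SPDK target shutdown done'
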
00:13:19.585 16:37:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:19.585 16:37:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:19.585 16:37:23 -- common/autotest_common.sh@10 -- # set +x 00:13:19.585 ************************************ 00:13:19.585 START TEST alias_rpc 00:13:19.585 ************************************ 00:13:19.585 16:37:23 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:13:19.585 * Looking for test storage... 00:13:19.585 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:13:19.585 16:37:24 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:19.585 16:37:24 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:13:19.585 16:37:24 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:19.585 16:37:24 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@345 -- # : 1 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:13:19.585 16:37:24 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:19.586 16:37:24 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:13:19.586 16:37:24 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:19.586 16:37:24 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:19.586 16:37:24 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:19.586 16:37:24 alias_rpc -- scripts/common.sh@368 -- # return 0 00:13:19.586 16:37:24 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:19.586 16:37:24 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:19.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.586 --rc genhtml_branch_coverage=1 00:13:19.586 --rc genhtml_function_coverage=1 00:13:19.586 --rc genhtml_legend=1 00:13:19.586 --rc geninfo_all_blocks=1 00:13:19.586 --rc geninfo_unexecuted_blocks=1 00:13:19.586 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:19.586 ' 00:13:19.586 16:37:24 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:19.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.586 --rc genhtml_branch_coverage=1 00:13:19.586 --rc genhtml_function_coverage=1 00:13:19.586 --rc genhtml_legend=1 00:13:19.586 --rc geninfo_all_blocks=1 00:13:19.586 --rc geninfo_unexecuted_blocks=1 00:13:19.586 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:19.586 ' 00:13:19.586 16:37:24 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:19.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.586 --rc genhtml_branch_coverage=1 00:13:19.586 --rc genhtml_function_coverage=1 00:13:19.586 --rc genhtml_legend=1 00:13:19.586 --rc geninfo_all_blocks=1 00:13:19.586 --rc geninfo_unexecuted_blocks=1 00:13:19.586 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:19.586 ' 00:13:19.586 16:37:24 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:19.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:19.586 --rc genhtml_branch_coverage=1 00:13:19.586 --rc genhtml_function_coverage=1 00:13:19.586 --rc genhtml_legend=1 00:13:19.586 --rc geninfo_all_blocks=1 00:13:19.586 --rc geninfo_unexecuted_blocks=1 00:13:19.586 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:19.586 ' 00:13:19.586 16:37:24 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:19.586 16:37:24 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:13:19.586 16:37:24 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3512130 00:13:19.586 16:37:24 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3512130 00:13:19.586 16:37:24 alias_rpc -- 
common/autotest_common.sh@833 -- # '[' -z 3512130 ']' 00:13:19.586 16:37:24 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.586 16:37:24 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:19.586 16:37:24 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.586 16:37:24 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:19.586 16:37:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.845 [2024-11-05 16:37:24.172823] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:13:19.845 [2024-11-05 16:37:24.172879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3512130 ] 00:13:19.845 [2024-11-05 16:37:24.280364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.845 [2024-11-05 16:37:24.340483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.104 16:37:24 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:20.104 16:37:24 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:13:20.104 16:37:24 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:13:20.364 16:37:24 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3512130 00:13:20.364 16:37:24 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 3512130 ']' 00:13:20.364 16:37:24 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 3512130 00:13:20.364 16:37:24 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:13:20.364 16:37:24 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:20.364 16:37:24 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3512130 00:13:20.364 16:37:24 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:20.364 16:37:24 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:20.364 16:37:24 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3512130' 00:13:20.364 killing process with pid 3512130 00:13:20.364 16:37:24 alias_rpc -- common/autotest_common.sh@971 -- # kill 3512130 00:13:20.364 16:37:24 alias_rpc -- common/autotest_common.sh@976 -- # wait 3512130 00:13:20.623 00:13:20.623 real 0m1.238s 00:13:20.623 user 0m1.262s 00:13:20.623 sys 0m0.485s 00:13:20.623 16:37:25 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:20.623 16:37:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.623 ************************************ 00:13:20.623 END TEST alias_rpc 00:13:20.623 ************************************ 00:13:20.883 16:37:25 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:13:20.883 16:37:25 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:13:20.883 16:37:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:20.883 16:37:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:20.883 16:37:25 -- common/autotest_common.sh@10 -- # set +x 00:13:20.883 ************************************ 00:13:20.883 START TEST 
spdkcli_tcp 00:13:20.883 ************************************ 00:13:20.883 16:37:25 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:13:20.883 * Looking for test storage... 00:13:20.883 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:13:20.883 16:37:25 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:20.883 16:37:25 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:13:20.883 16:37:25 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:20.883 16:37:25 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:20.883 16:37:25 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.883 16:37:25 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.883 16:37:25 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.883 16:37:25 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.883 16:37:25 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.883 16:37:25 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.883 16:37:25 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.883 16:37:25 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:21.143 16:37:25 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:13:21.143 16:37:25 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.143 16:37:25 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:21.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.143 --rc genhtml_branch_coverage=1 00:13:21.143 --rc genhtml_function_coverage=1 00:13:21.143 --rc genhtml_legend=1 00:13:21.143 --rc geninfo_all_blocks=1 00:13:21.143 --rc geninfo_unexecuted_blocks=1 00:13:21.143 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:21.143 ' 00:13:21.143 16:37:25 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:21.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.143 --rc genhtml_branch_coverage=1 00:13:21.143 --rc genhtml_function_coverage=1 00:13:21.143 --rc genhtml_legend=1 00:13:21.143 --rc geninfo_all_blocks=1 00:13:21.143 --rc geninfo_unexecuted_blocks=1 00:13:21.143 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:21.143 ' 00:13:21.143 16:37:25 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:21.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.143 --rc genhtml_branch_coverage=1 00:13:21.143 --rc genhtml_function_coverage=1 00:13:21.143 --rc genhtml_legend=1 00:13:21.143 --rc geninfo_all_blocks=1 00:13:21.143 --rc geninfo_unexecuted_blocks=1 00:13:21.143 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:21.143 ' 00:13:21.143 16:37:25 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:21.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.143 --rc genhtml_branch_coverage=1 00:13:21.143 --rc genhtml_function_coverage=1 00:13:21.143 --rc genhtml_legend=1 00:13:21.143 --rc geninfo_all_blocks=1 00:13:21.143 --rc geninfo_unexecuted_blocks=1 00:13:21.143 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:21.143 ' 00:13:21.143 16:37:25 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:13:21.143 16:37:25 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:13:21.143 16:37:25 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:13:21.143 16:37:25 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:13:21.143 16:37:25 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:13:21.143 16:37:25 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:21.143 16:37:25 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:13:21.143 16:37:25 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:21.143 16:37:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:21.143 16:37:25 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3512369 00:13:21.143 16:37:25 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3512369 00:13:21.143 16:37:25 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:13:21.143 16:37:25 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 3512369 ']' 00:13:21.143 16:37:25 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.143 16:37:25 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:21.143 16:37:25 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.143 16:37:25 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:21.143 16:37:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:21.143 [2024-11-05 16:37:25.521827] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:13:21.143 [2024-11-05 16:37:25.521906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3512369 ] 00:13:21.143 [2024-11-05 16:37:25.647157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:21.143 [2024-11-05 16:37:25.704605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.143 [2024-11-05 16:37:25.704610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.403 16:37:25 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:21.403 16:37:25 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:13:21.403 16:37:25 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3512532 00:13:21.403 16:37:25 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:13:21.403 16:37:25 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:13:21.687 [ 00:13:21.687 "spdk_get_version", 00:13:21.687 "rpc_get_methods", 00:13:21.687 "notify_get_notifications", 00:13:21.687 "notify_get_types", 00:13:21.687 "trace_get_info", 00:13:21.687 "trace_get_tpoint_group_mask", 00:13:21.687 "trace_disable_tpoint_group", 00:13:21.687 "trace_enable_tpoint_group", 00:13:21.687 "trace_clear_tpoint_mask", 00:13:21.687 "trace_set_tpoint_mask", 00:13:21.687 "fsdev_set_opts", 00:13:21.687 "fsdev_get_opts", 00:13:21.687 "framework_get_pci_devices", 00:13:21.687 "framework_get_config", 00:13:21.687 "framework_get_subsystems", 00:13:21.687 "vfu_tgt_set_base_path", 00:13:21.687 
"keyring_get_keys", 00:13:21.687 "iobuf_get_stats", 00:13:21.687 "iobuf_set_options", 00:13:21.687 "sock_get_default_impl", 00:13:21.687 "sock_set_default_impl", 00:13:21.687 "sock_impl_set_options", 00:13:21.687 "sock_impl_get_options", 00:13:21.687 "vmd_rescan", 00:13:21.687 "vmd_remove_device", 00:13:21.687 "vmd_enable", 00:13:21.687 "accel_get_stats", 00:13:21.687 "accel_set_options", 00:13:21.687 "accel_set_driver", 00:13:21.687 "accel_crypto_key_destroy", 00:13:21.687 "accel_crypto_keys_get", 00:13:21.687 "accel_crypto_key_create", 00:13:21.687 "accel_assign_opc", 00:13:21.687 "accel_get_module_info", 00:13:21.687 "accel_get_opc_assignments", 00:13:21.687 "bdev_get_histogram", 00:13:21.687 "bdev_enable_histogram", 00:13:21.687 "bdev_set_qos_limit", 00:13:21.687 "bdev_set_qd_sampling_period", 00:13:21.687 "bdev_get_bdevs", 00:13:21.687 "bdev_reset_iostat", 00:13:21.687 "bdev_get_iostat", 00:13:21.687 "bdev_examine", 00:13:21.687 "bdev_wait_for_examine", 00:13:21.687 "bdev_set_options", 00:13:21.687 "scsi_get_devices", 00:13:21.687 "thread_set_cpumask", 00:13:21.687 "scheduler_set_options", 00:13:21.687 "framework_get_governor", 00:13:21.687 "framework_get_scheduler", 00:13:21.687 "framework_set_scheduler", 00:13:21.687 "framework_get_reactors", 00:13:21.687 "thread_get_io_channels", 00:13:21.687 "thread_get_pollers", 00:13:21.687 "thread_get_stats", 00:13:21.687 "framework_monitor_context_switch", 00:13:21.687 "spdk_kill_instance", 00:13:21.687 "log_enable_timestamps", 00:13:21.687 "log_get_flags", 00:13:21.687 "log_clear_flag", 00:13:21.687 "log_set_flag", 00:13:21.687 "log_get_level", 00:13:21.687 "log_set_level", 00:13:21.687 "log_get_print_level", 00:13:21.687 "log_set_print_level", 00:13:21.687 "framework_enable_cpumask_locks", 00:13:21.687 "framework_disable_cpumask_locks", 00:13:21.687 "framework_wait_init", 00:13:21.687 "framework_start_init", 00:13:21.687 "virtio_blk_create_transport", 00:13:21.687 "virtio_blk_get_transports", 00:13:21.687 "vhost_controller_set_coalescing", 00:13:21.687 "vhost_get_controllers", 00:13:21.687 "vhost_delete_controller", 00:13:21.687 "vhost_create_blk_controller", 00:13:21.687 "vhost_scsi_controller_remove_target", 00:13:21.687 "vhost_scsi_controller_add_target", 00:13:21.687 "vhost_start_scsi_controller", 00:13:21.687 "vhost_create_scsi_controller", 00:13:21.687 "ublk_recover_disk", 00:13:21.687 "ublk_get_disks", 00:13:21.687 "ublk_stop_disk", 00:13:21.687 "ublk_start_disk", 00:13:21.687 "ublk_destroy_target", 00:13:21.687 "ublk_create_target", 00:13:21.687 "nbd_get_disks", 00:13:21.687 "nbd_stop_disk", 00:13:21.687 "nbd_start_disk", 00:13:21.688 "env_dpdk_get_mem_stats", 00:13:21.688 "nvmf_stop_mdns_prr", 00:13:21.688 "nvmf_publish_mdns_prr", 00:13:21.688 "nvmf_subsystem_get_listeners", 00:13:21.688 "nvmf_subsystem_get_qpairs", 00:13:21.688 "nvmf_subsystem_get_controllers", 00:13:21.688 "nvmf_get_stats", 00:13:21.688 "nvmf_get_transports", 00:13:21.688 "nvmf_create_transport", 00:13:21.688 "nvmf_get_targets", 00:13:21.688 "nvmf_delete_target", 00:13:21.688 "nvmf_create_target", 00:13:21.688 "nvmf_subsystem_allow_any_host", 00:13:21.688 "nvmf_subsystem_set_keys", 00:13:21.688 "nvmf_subsystem_remove_host", 00:13:21.688 "nvmf_subsystem_add_host", 00:13:21.688 "nvmf_ns_remove_host", 00:13:21.688 "nvmf_ns_add_host", 00:13:21.688 "nvmf_subsystem_remove_ns", 00:13:21.688 "nvmf_subsystem_set_ns_ana_group", 00:13:21.688 "nvmf_subsystem_add_ns", 00:13:21.688 "nvmf_subsystem_listener_set_ana_state", 00:13:21.688 "nvmf_discovery_get_referrals", 
00:13:21.688 "nvmf_discovery_remove_referral", 00:13:21.688 "nvmf_discovery_add_referral", 00:13:21.688 "nvmf_subsystem_remove_listener", 00:13:21.688 "nvmf_subsystem_add_listener", 00:13:21.688 "nvmf_delete_subsystem", 00:13:21.688 "nvmf_create_subsystem", 00:13:21.688 "nvmf_get_subsystems", 00:13:21.688 "nvmf_set_crdt", 00:13:21.688 "nvmf_set_config", 00:13:21.688 "nvmf_set_max_subsystems", 00:13:21.688 "iscsi_get_histogram", 00:13:21.688 "iscsi_enable_histogram", 00:13:21.688 "iscsi_set_options", 00:13:21.688 "iscsi_get_auth_groups", 00:13:21.688 "iscsi_auth_group_remove_secret", 00:13:21.688 "iscsi_auth_group_add_secret", 00:13:21.688 "iscsi_delete_auth_group", 00:13:21.688 "iscsi_create_auth_group", 00:13:21.688 "iscsi_set_discovery_auth", 00:13:21.688 "iscsi_get_options", 00:13:21.688 "iscsi_target_node_request_logout", 00:13:21.688 "iscsi_target_node_set_redirect", 00:13:21.688 "iscsi_target_node_set_auth", 00:13:21.688 "iscsi_target_node_add_lun", 00:13:21.688 "iscsi_get_stats", 00:13:21.688 "iscsi_get_connections", 00:13:21.688 "iscsi_portal_group_set_auth", 00:13:21.688 "iscsi_start_portal_group", 00:13:21.688 "iscsi_delete_portal_group", 00:13:21.688 "iscsi_create_portal_group", 00:13:21.688 "iscsi_get_portal_groups", 00:13:21.688 "iscsi_delete_target_node", 00:13:21.688 "iscsi_target_node_remove_pg_ig_maps", 00:13:21.688 "iscsi_target_node_add_pg_ig_maps", 00:13:21.688 "iscsi_create_target_node", 00:13:21.688 "iscsi_get_target_nodes", 00:13:21.688 "iscsi_delete_initiator_group", 00:13:21.688 "iscsi_initiator_group_remove_initiators", 00:13:21.688 "iscsi_initiator_group_add_initiators", 00:13:21.688 "iscsi_create_initiator_group", 00:13:21.688 "iscsi_get_initiator_groups", 00:13:21.688 "fsdev_aio_delete", 00:13:21.688 "fsdev_aio_create", 00:13:21.688 "keyring_linux_set_options", 00:13:21.688 "keyring_file_remove_key", 00:13:21.688 "keyring_file_add_key", 00:13:21.688 "vfu_virtio_create_fs_endpoint", 00:13:21.688 "vfu_virtio_create_scsi_endpoint", 00:13:21.688 "vfu_virtio_scsi_remove_target", 00:13:21.688 "vfu_virtio_scsi_add_target", 00:13:21.688 "vfu_virtio_create_blk_endpoint", 00:13:21.688 "vfu_virtio_delete_endpoint", 00:13:21.688 "iaa_scan_accel_module", 00:13:21.688 "dsa_scan_accel_module", 00:13:21.688 "ioat_scan_accel_module", 00:13:21.688 "accel_error_inject_error", 00:13:21.688 "bdev_iscsi_delete", 00:13:21.688 "bdev_iscsi_create", 00:13:21.688 "bdev_iscsi_set_options", 00:13:21.688 "bdev_virtio_attach_controller", 00:13:21.688 "bdev_virtio_scsi_get_devices", 00:13:21.688 "bdev_virtio_detach_controller", 00:13:21.688 "bdev_virtio_blk_set_hotplug", 00:13:21.688 "bdev_ftl_set_property", 00:13:21.688 "bdev_ftl_get_properties", 00:13:21.688 "bdev_ftl_get_stats", 00:13:21.688 "bdev_ftl_unmap", 00:13:21.688 "bdev_ftl_unload", 00:13:21.688 "bdev_ftl_delete", 00:13:21.688 "bdev_ftl_load", 00:13:21.688 "bdev_ftl_create", 00:13:21.688 "bdev_aio_delete", 00:13:21.688 "bdev_aio_rescan", 00:13:21.688 "bdev_aio_create", 00:13:21.688 "blobfs_create", 00:13:21.688 "blobfs_detect", 00:13:21.688 "blobfs_set_cache_size", 00:13:21.688 "bdev_zone_block_delete", 00:13:21.688 "bdev_zone_block_create", 00:13:21.688 "bdev_delay_delete", 00:13:21.688 "bdev_delay_create", 00:13:21.688 "bdev_delay_update_latency", 00:13:21.688 "bdev_split_delete", 00:13:21.688 "bdev_split_create", 00:13:21.688 "bdev_error_inject_error", 00:13:21.688 "bdev_error_delete", 00:13:21.688 "bdev_error_create", 00:13:21.688 "bdev_raid_set_options", 00:13:21.688 "bdev_raid_remove_base_bdev", 00:13:21.688 
"bdev_raid_add_base_bdev", 00:13:21.688 "bdev_raid_delete", 00:13:21.688 "bdev_raid_create", 00:13:21.688 "bdev_raid_get_bdevs", 00:13:21.688 "bdev_lvol_set_parent_bdev", 00:13:21.688 "bdev_lvol_set_parent", 00:13:21.688 "bdev_lvol_check_shallow_copy", 00:13:21.688 "bdev_lvol_start_shallow_copy", 00:13:21.688 "bdev_lvol_grow_lvstore", 00:13:21.688 "bdev_lvol_get_lvols", 00:13:21.688 "bdev_lvol_get_lvstores", 00:13:21.688 "bdev_lvol_delete", 00:13:21.688 "bdev_lvol_set_read_only", 00:13:21.688 "bdev_lvol_resize", 00:13:21.688 "bdev_lvol_decouple_parent", 00:13:21.688 "bdev_lvol_inflate", 00:13:21.688 "bdev_lvol_rename", 00:13:21.688 "bdev_lvol_clone_bdev", 00:13:21.688 "bdev_lvol_clone", 00:13:21.688 "bdev_lvol_snapshot", 00:13:21.688 "bdev_lvol_create", 00:13:21.688 "bdev_lvol_delete_lvstore", 00:13:21.688 "bdev_lvol_rename_lvstore", 00:13:21.688 "bdev_lvol_create_lvstore", 00:13:21.688 "bdev_passthru_delete", 00:13:21.688 "bdev_passthru_create", 00:13:21.688 "bdev_nvme_cuse_unregister", 00:13:21.688 "bdev_nvme_cuse_register", 00:13:21.688 "bdev_opal_new_user", 00:13:21.688 "bdev_opal_set_lock_state", 00:13:21.688 "bdev_opal_delete", 00:13:21.688 "bdev_opal_get_info", 00:13:21.688 "bdev_opal_create", 00:13:21.688 "bdev_nvme_opal_revert", 00:13:21.688 "bdev_nvme_opal_init", 00:13:21.688 "bdev_nvme_send_cmd", 00:13:21.688 "bdev_nvme_set_keys", 00:13:21.688 "bdev_nvme_get_path_iostat", 00:13:21.688 "bdev_nvme_get_mdns_discovery_info", 00:13:21.688 "bdev_nvme_stop_mdns_discovery", 00:13:21.688 "bdev_nvme_start_mdns_discovery", 00:13:21.688 "bdev_nvme_set_multipath_policy", 00:13:21.688 "bdev_nvme_set_preferred_path", 00:13:21.688 "bdev_nvme_get_io_paths", 00:13:21.688 "bdev_nvme_remove_error_injection", 00:13:21.688 "bdev_nvme_add_error_injection", 00:13:21.688 "bdev_nvme_get_discovery_info", 00:13:21.688 "bdev_nvme_stop_discovery", 00:13:21.688 "bdev_nvme_start_discovery", 00:13:21.688 "bdev_nvme_get_controller_health_info", 00:13:21.688 "bdev_nvme_disable_controller", 00:13:21.688 "bdev_nvme_enable_controller", 00:13:21.688 "bdev_nvme_reset_controller", 00:13:21.688 "bdev_nvme_get_transport_statistics", 00:13:21.688 "bdev_nvme_apply_firmware", 00:13:21.688 "bdev_nvme_detach_controller", 00:13:21.688 "bdev_nvme_get_controllers", 00:13:21.688 "bdev_nvme_attach_controller", 00:13:21.688 "bdev_nvme_set_hotplug", 00:13:21.688 "bdev_nvme_set_options", 00:13:21.688 "bdev_null_resize", 00:13:21.688 "bdev_null_delete", 00:13:21.688 "bdev_null_create", 00:13:21.688 "bdev_malloc_delete", 00:13:21.688 "bdev_malloc_create" 00:13:21.688 ] 00:13:21.689 16:37:26 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:13:21.689 16:37:26 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:21.689 16:37:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:21.689 16:37:26 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:21.689 16:37:26 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3512369 00:13:21.689 16:37:26 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 3512369 ']' 00:13:21.689 16:37:26 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 3512369 00:13:21.689 16:37:26 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:13:21.689 16:37:26 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:21.689 16:37:26 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3512369 00:13:21.948 16:37:26 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:21.948 
16:37:26 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:21.948 16:37:26 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3512369' 00:13:21.948 killing process with pid 3512369 00:13:21.948 16:37:26 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 3512369 00:13:21.948 16:37:26 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 3512369 00:13:22.207 00:13:22.207 real 0m1.405s 00:13:22.207 user 0m2.385s 00:13:22.207 sys 0m0.576s 00:13:22.207 16:37:26 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:22.207 16:37:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:22.207 ************************************ 00:13:22.207 END TEST spdkcli_tcp 00:13:22.207 ************************************ 00:13:22.207 16:37:26 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:13:22.207 16:37:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:22.207 16:37:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:22.207 16:37:26 -- common/autotest_common.sh@10 -- # set +x 00:13:22.207 ************************************ 00:13:22.207 START TEST dpdk_mem_utility 00:13:22.207 ************************************ 00:13:22.207 16:37:26 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:13:22.467 * Looking for test storage... 00:13:22.467 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:13:22.467 16:37:26 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:22.467 16:37:26 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:13:22.467 16:37:26 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:22.467 16:37:26 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:22.467 16:37:26 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:13:22.467 16:37:26 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.467 16:37:26 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:22.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.467 --rc genhtml_branch_coverage=1 00:13:22.467 --rc genhtml_function_coverage=1 00:13:22.467 --rc genhtml_legend=1 00:13:22.467 --rc geninfo_all_blocks=1 00:13:22.467 --rc geninfo_unexecuted_blocks=1 00:13:22.467 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:22.467 ' 00:13:22.467 16:37:26 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:22.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.467 --rc genhtml_branch_coverage=1 00:13:22.467 --rc genhtml_function_coverage=1 00:13:22.467 --rc genhtml_legend=1 00:13:22.467 --rc geninfo_all_blocks=1 00:13:22.467 --rc geninfo_unexecuted_blocks=1 00:13:22.468 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:22.468 ' 00:13:22.468 16:37:26 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:22.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.468 --rc genhtml_branch_coverage=1 00:13:22.468 --rc genhtml_function_coverage=1 00:13:22.468 --rc genhtml_legend=1 00:13:22.468 --rc geninfo_all_blocks=1 00:13:22.468 --rc geninfo_unexecuted_blocks=1 00:13:22.468 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:22.468 ' 00:13:22.468 16:37:26 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:22.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.468 --rc genhtml_branch_coverage=1 00:13:22.468 --rc genhtml_function_coverage=1 00:13:22.468 --rc genhtml_legend=1 00:13:22.468 --rc geninfo_all_blocks=1 00:13:22.468 --rc geninfo_unexecuted_blocks=1 00:13:22.468 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:22.468 ' 00:13:22.468 16:37:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:13:22.468 16:37:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3512623 00:13:22.468 16:37:26 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3512623 00:13:22.468 16:37:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:13:22.468 16:37:26 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 3512623 ']' 00:13:22.468 16:37:26 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.468 16:37:26 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:22.468 16:37:26 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.468 16:37:26 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:22.468 16:37:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:13:22.468 [2024-11-05 16:37:26.984436] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:13:22.468 [2024-11-05 16:37:26.984517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3512623 ] 00:13:22.727 [2024-11-05 16:37:27.110401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.727 [2024-11-05 16:37:27.165269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.986 16:37:27 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:22.986 16:37:27 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:13:22.986 16:37:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:13:22.986 16:37:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:13:22.986 16:37:27 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.986 16:37:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:13:22.986 { 00:13:22.986 "filename": "/tmp/spdk_mem_dump.txt" 00:13:22.986 } 00:13:22.986 16:37:27 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.986 16:37:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:13:22.986 DPDK memory size 810.000000 MiB in 1 heap(s) 00:13:22.986 1 heaps totaling size 810.000000 MiB 00:13:22.986 size: 810.000000 MiB heap id: 0 00:13:22.986 end heaps---------- 00:13:22.986 9 mempools totaling size 595.772034 MiB 00:13:22.986 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:13:22.986 size: 158.602051 MiB name: PDU_data_out_Pool 00:13:22.986 size: 92.545471 MiB name: bdev_io_3512623 00:13:22.986 size: 50.003479 MiB name: msgpool_3512623 00:13:22.986 size: 36.509338 MiB name: fsdev_io_3512623 00:13:22.986 size: 21.763794 MiB name: PDU_Pool 00:13:22.986 size: 19.513306 MiB name: SCSI_TASK_Pool 00:13:22.986 size: 4.133484 MiB name: evtpool_3512623 00:13:22.986 size: 0.026123 MiB name: Session_Pool 00:13:22.986 end mempools------- 00:13:22.986 6 memzones totaling size 4.142822 MiB 00:13:22.986 size: 1.000366 MiB name: RG_ring_0_3512623 00:13:22.986 size: 1.000366 MiB name: RG_ring_1_3512623 00:13:22.986 size: 1.000366 MiB name: RG_ring_4_3512623 
00:13:22.986 size: 1.000366 MiB name: RG_ring_5_3512623 00:13:22.986 size: 0.125366 MiB name: RG_ring_2_3512623 00:13:22.986 size: 0.015991 MiB name: RG_ring_3_3512623 00:13:22.986 end memzones------- 00:13:22.986 16:37:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:13:22.986 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:13:22.986 list of free elements. size: 10.862488 MiB 00:13:22.986 element at address: 0x200018a00000 with size: 0.999878 MiB 00:13:22.986 element at address: 0x200018c00000 with size: 0.999878 MiB 00:13:22.986 element at address: 0x200000400000 with size: 0.998535 MiB 00:13:22.986 element at address: 0x200031800000 with size: 0.994446 MiB 00:13:22.986 element at address: 0x200008000000 with size: 0.959839 MiB 00:13:22.986 element at address: 0x200012c00000 with size: 0.954285 MiB 00:13:22.986 element at address: 0x200018e00000 with size: 0.936584 MiB 00:13:22.986 element at address: 0x200000200000 with size: 0.717346 MiB 00:13:22.986 element at address: 0x20001a600000 with size: 0.582886 MiB 00:13:22.986 element at address: 0x200000c00000 with size: 0.495422 MiB 00:13:22.986 element at address: 0x200003e00000 with size: 0.490723 MiB 00:13:22.986 element at address: 0x200019000000 with size: 0.485657 MiB 00:13:22.986 element at address: 0x200010600000 with size: 0.481934 MiB 00:13:22.986 element at address: 0x200027a00000 with size: 0.410034 MiB 00:13:22.986 element at address: 0x200000800000 with size: 0.355042 MiB 00:13:22.986 list of standard malloc elements. size: 199.218628 MiB 00:13:22.986 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:13:22.986 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:13:22.986 element at address: 0x200018afff80 with size: 1.000122 MiB 00:13:22.986 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:13:22.986 element at address: 0x200018efff80 with size: 1.000122 MiB 00:13:22.986 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:13:22.986 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:13:22.986 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:13:22.986 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:13:22.986 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:13:22.986 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:13:22.986 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:13:22.986 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:13:22.986 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:13:22.986 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:13:22.986 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:13:22.986 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:13:22.986 element at address: 0x20000085b040 with size: 0.000183 MiB 00:13:22.986 element at address: 0x20000085b100 with size: 0.000183 MiB 00:13:22.986 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:13:22.986 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:13:22.986 element at address: 0x2000008df880 with size: 0.000183 MiB 00:13:22.986 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:13:22.986 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:13:22.986 element at address: 0x200000cff000 with size: 0.000183 MiB 00:13:22.986 element at address: 0x200000cff0c0 with size: 
0.000183 MiB 00:13:22.986 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:13:22.986 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:13:22.986 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:13:22.986 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:13:22.986 element at address: 0x20001067b600 with size: 0.000183 MiB 00:13:22.986 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:13:22.986 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:13:22.986 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:13:22.986 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:13:22.986 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:13:22.986 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:13:22.986 element at address: 0x20001a695380 with size: 0.000183 MiB 00:13:22.986 element at address: 0x20001a695440 with size: 0.000183 MiB 00:13:22.986 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:13:22.986 element at address: 0x200027a69040 with size: 0.000183 MiB 00:13:22.986 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:13:22.986 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:13:22.986 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:13:22.986 list of memzone associated elements. size: 599.918884 MiB 00:13:22.986 element at address: 0x20001a695500 with size: 211.416748 MiB 00:13:22.986 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:13:22.986 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:13:22.986 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:13:22.986 element at address: 0x200012df4780 with size: 92.045044 MiB 00:13:22.986 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3512623_0 00:13:22.986 element at address: 0x200000dff380 with size: 48.003052 MiB 00:13:22.986 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3512623_0 00:13:22.986 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:13:22.986 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3512623_0 00:13:22.986 element at address: 0x2000191be940 with size: 20.255554 MiB 00:13:22.986 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:13:22.986 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:13:22.986 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:13:22.986 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:13:22.986 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3512623_0 00:13:22.986 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:13:22.986 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3512623 00:13:22.987 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:13:22.987 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3512623 00:13:22.987 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:13:22.987 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:13:22.987 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:13:22.987 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:13:22.987 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:13:22.987 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:13:22.987 element at address: 0x200003efde40 with size: 1.008118 MiB 00:13:22.987 associated memzone info: size: 1.007996 MiB 
name: MP_SCSI_TASK_Pool 00:13:22.987 element at address: 0x200000cff180 with size: 1.000488 MiB 00:13:22.987 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3512623 00:13:22.987 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:13:22.987 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3512623 00:13:22.987 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:13:22.987 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3512623 00:13:22.987 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:13:22.987 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3512623 00:13:22.987 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:13:22.987 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3512623 00:13:22.987 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:13:22.987 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3512623 00:13:22.987 element at address: 0x20001067b780 with size: 0.500488 MiB 00:13:22.987 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:13:22.987 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:13:22.987 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:13:22.987 element at address: 0x20001907c540 with size: 0.250488 MiB 00:13:22.987 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:13:22.987 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:13:22.987 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3512623 00:13:22.987 element at address: 0x2000008df940 with size: 0.125488 MiB 00:13:22.987 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3512623 00:13:22.987 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:13:22.987 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:13:22.987 element at address: 0x200027a69100 with size: 0.023743 MiB 00:13:22.987 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:13:22.987 element at address: 0x2000008db680 with size: 0.016113 MiB 00:13:22.987 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3512623 00:13:22.987 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:13:22.987 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:13:22.987 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:13:22.987 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3512623 00:13:22.987 element at address: 0x2000008db480 with size: 0.000305 MiB 00:13:22.987 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3512623 00:13:22.987 element at address: 0x20000085af00 with size: 0.000305 MiB 00:13:22.987 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3512623 00:13:22.987 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:13:22.987 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:13:22.987 16:37:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:13:22.987 16:37:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3512623 00:13:22.987 16:37:27 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 3512623 ']' 00:13:22.987 16:37:27 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 3512623 00:13:22.987 16:37:27 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:13:22.987 16:37:27 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux 
= Linux ']' 00:13:22.987 16:37:27 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3512623 00:13:23.246 16:37:27 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:23.246 16:37:27 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:23.246 16:37:27 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3512623' 00:13:23.246 killing process with pid 3512623 00:13:23.246 16:37:27 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 3512623 00:13:23.246 16:37:27 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 3512623 00:13:23.506 00:13:23.506 real 0m1.221s 00:13:23.506 user 0m1.168s 00:13:23.506 sys 0m0.529s 00:13:23.506 16:37:27 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:23.506 16:37:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:13:23.506 ************************************ 00:13:23.506 END TEST dpdk_mem_utility 00:13:23.506 ************************************ 00:13:23.506 16:37:28 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:13:23.506 16:37:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:23.506 16:37:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:23.506 16:37:28 -- common/autotest_common.sh@10 -- # set +x 00:13:23.506 ************************************ 00:13:23.506 START TEST event 00:13:23.506 ************************************ 00:13:23.506 16:37:28 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:13:23.765 * Looking for test storage... 00:13:23.765 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:13:23.765 16:37:28 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:23.765 16:37:28 event -- common/autotest_common.sh@1691 -- # lcov --version 00:13:23.765 16:37:28 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:23.765 16:37:28 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:23.765 16:37:28 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:23.765 16:37:28 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:23.765 16:37:28 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:23.765 16:37:28 event -- scripts/common.sh@336 -- # IFS=.-: 00:13:23.765 16:37:28 event -- scripts/common.sh@336 -- # read -ra ver1 00:13:23.765 16:37:28 event -- scripts/common.sh@337 -- # IFS=.-: 00:13:23.765 16:37:28 event -- scripts/common.sh@337 -- # read -ra ver2 00:13:23.765 16:37:28 event -- scripts/common.sh@338 -- # local 'op=<' 00:13:23.765 16:37:28 event -- scripts/common.sh@340 -- # ver1_l=2 00:13:23.765 16:37:28 event -- scripts/common.sh@341 -- # ver2_l=1 00:13:23.765 16:37:28 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:23.765 16:37:28 event -- scripts/common.sh@344 -- # case "$op" in 00:13:23.765 16:37:28 event -- scripts/common.sh@345 -- # : 1 00:13:23.765 16:37:28 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:23.765 16:37:28 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:23.765 16:37:28 event -- scripts/common.sh@365 -- # decimal 1 00:13:23.765 16:37:28 event -- scripts/common.sh@353 -- # local d=1 00:13:23.765 16:37:28 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:23.765 16:37:28 event -- scripts/common.sh@355 -- # echo 1 00:13:23.765 16:37:28 event -- scripts/common.sh@365 -- # ver1[v]=1 00:13:23.765 16:37:28 event -- scripts/common.sh@366 -- # decimal 2 00:13:23.765 16:37:28 event -- scripts/common.sh@353 -- # local d=2 00:13:23.765 16:37:28 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:23.765 16:37:28 event -- scripts/common.sh@355 -- # echo 2 00:13:23.765 16:37:28 event -- scripts/common.sh@366 -- # ver2[v]=2 00:13:23.766 16:37:28 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:23.766 16:37:28 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:23.766 16:37:28 event -- scripts/common.sh@368 -- # return 0 00:13:23.766 16:37:28 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:23.766 16:37:28 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:23.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.766 --rc genhtml_branch_coverage=1 00:13:23.766 --rc genhtml_function_coverage=1 00:13:23.766 --rc genhtml_legend=1 00:13:23.766 --rc geninfo_all_blocks=1 00:13:23.766 --rc geninfo_unexecuted_blocks=1 00:13:23.766 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:23.766 ' 00:13:23.766 16:37:28 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:23.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.766 --rc genhtml_branch_coverage=1 00:13:23.766 --rc genhtml_function_coverage=1 00:13:23.766 --rc genhtml_legend=1 00:13:23.766 --rc geninfo_all_blocks=1 00:13:23.766 --rc geninfo_unexecuted_blocks=1 00:13:23.766 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:23.766 ' 00:13:23.766 16:37:28 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:23.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.766 --rc genhtml_branch_coverage=1 00:13:23.766 --rc genhtml_function_coverage=1 00:13:23.766 --rc genhtml_legend=1 00:13:23.766 --rc geninfo_all_blocks=1 00:13:23.766 --rc geninfo_unexecuted_blocks=1 00:13:23.766 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:23.766 ' 00:13:23.766 16:37:28 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:23.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:23.766 --rc genhtml_branch_coverage=1 00:13:23.766 --rc genhtml_function_coverage=1 00:13:23.766 --rc genhtml_legend=1 00:13:23.766 --rc geninfo_all_blocks=1 00:13:23.766 --rc geninfo_unexecuted_blocks=1 00:13:23.766 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:23.766 ' 00:13:23.766 16:37:28 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:13:23.766 16:37:28 event -- bdev/nbd_common.sh@6 -- # set -e 00:13:23.766 16:37:28 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:13:23.766 16:37:28 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:13:23.766 16:37:28 event -- common/autotest_common.sh@1109 -- # xtrace_disable 
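The `lt 1.15 2` check traced at the top of each suite splits both version strings on `.`, `-`, and `:` into arrays and compares them field by field, treating a missing field as 0. A standalone sketch of that comparison, simplified to purely numeric fields (the real scripts/common.sh additionally validates each field through its `decimal` helper):

```bash
#!/usr/bin/env bash
# Field-wise version compare: returns 0 (true) when $1 < $2.
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov 1.15 predates 2; use the llvm-gcov wrapper"
```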
00:13:23.766 16:37:28 event -- common/autotest_common.sh@10 -- # set +x 00:13:23.766 ************************************ 00:13:23.766 START TEST event_perf 00:13:23.766 ************************************ 00:13:23.766 16:37:28 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:13:23.766 Running I/O for 1 seconds...[2024-11-05 16:37:28.326554] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:13:23.766 [2024-11-05 16:37:28.326636] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3512866 ] 00:13:24.025 [2024-11-05 16:37:28.451663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.025 [2024-11-05 16:37:28.511049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.025 [2024-11-05 16:37:28.511149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.025 [2024-11-05 16:37:28.511240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.025 [2024-11-05 16:37:28.511244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.401 Running I/O for 1 seconds... 00:13:25.401 lcore 0: 177150 00:13:25.401 lcore 1: 177148 00:13:25.401 lcore 2: 177150 00:13:25.401 lcore 3: 177151 00:13:25.401 done. 00:13:25.401 00:13:25.401 real 0m1.252s 00:13:25.401 user 0m4.124s 00:13:25.401 sys 0m0.123s 00:13:25.401 16:37:29 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:25.402 16:37:29 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:13:25.402 ************************************ 00:13:25.402 END TEST event_perf 00:13:25.402 ************************************ 00:13:25.402 16:37:29 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:13:25.402 16:37:29 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:25.402 16:37:29 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:25.402 16:37:29 event -- common/autotest_common.sh@10 -- # set +x 00:13:25.402 ************************************ 00:13:25.402 START TEST event_reactor 00:13:25.402 ************************************ 00:13:25.402 16:37:29 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:13:25.402 [2024-11-05 16:37:29.653765] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
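In the event_perf run above, `-m 0xF` is a hexadecimal coremask: bits 0 through 3 select lcores 0 through 3, which is why EAL reports four cores available and four per-lcore event counters are printed. A quick illustrative helper (not part of the suite) for expanding such a mask:

```bash
#!/usr/bin/env bash
# Expand a DPDK-style hex coremask into the lcores it selects.
mask=0xF
for (( core = 0; core < 64; core++ )); do
  (( (mask >> core) & 1 )) && echo "lcore ${core} selected"
done
```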
00:13:25.402 [2024-11-05 16:37:29.653846] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3513066 ] 00:13:25.402 [2024-11-05 16:37:29.778963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.402 [2024-11-05 16:37:29.833723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.344 test_start 00:13:26.344 oneshot 00:13:26.344 tick 100 00:13:26.344 tick 100 00:13:26.344 tick 250 00:13:26.344 tick 100 00:13:26.344 tick 100 00:13:26.344 tick 100 00:13:26.344 tick 250 00:13:26.344 tick 500 00:13:26.344 tick 100 00:13:26.344 tick 100 00:13:26.344 tick 250 00:13:26.344 tick 100 00:13:26.344 tick 100 00:13:26.344 test_end 00:13:26.344 00:13:26.344 real 0m1.244s 00:13:26.344 user 0m1.114s 00:13:26.344 sys 0m0.123s 00:13:26.344 16:37:30 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:26.344 16:37:30 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:13:26.344 ************************************ 00:13:26.344 END TEST event_reactor 00:13:26.344 ************************************ 00:13:26.344 16:37:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:13:26.344 16:37:30 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:26.344 16:37:30 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:26.344 16:37:30 event -- common/autotest_common.sh@10 -- # set +x 00:13:26.603 ************************************ 00:13:26.603 START TEST event_reactor_perf 00:13:26.603 ************************************ 00:13:26.603 16:37:30 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:13:26.603 [2024-11-05 16:37:30.961930] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
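Every target launch in these suites blocks on a waitforlisten helper (the repeated "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." lines) before any RPC is issued. A minimal sketch of that wait loop, reusing the `rpc_addr` and `max_retries=100` values visible in the traces; the real helper probes with rpc.py rather than merely testing for the socket file:

```bash
#!/usr/bin/env bash
# Poll until the target's RPC UNIX socket exists, then proceed.
rpc_addr=/var/tmp/spdk.sock
max_retries=100
for (( i = 0; i < max_retries; i++ )); do
  [[ -S "${rpc_addr}" ]] && break   # simplified readiness probe
  sleep 0.1
done
(( i < max_retries )) || { echo "timed out waiting on ${rpc_addr}" >&2; exit 1; }
```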
00:13:26.603 [2024-11-05 16:37:30.962011] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3513257 ] 00:13:26.603 [2024-11-05 16:37:31.087567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.603 [2024-11-05 16:37:31.142866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.983 test_start 00:13:27.983 test_end 00:13:27.983 Performance: 607313 events per second 00:13:27.983 00:13:27.983 real 0m1.245s 00:13:27.983 user 0m1.113s 00:13:27.983 sys 0m0.126s 00:13:27.983 16:37:32 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:27.983 16:37:32 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:13:27.983 ************************************ 00:13:27.983 END TEST event_reactor_perf 00:13:27.983 ************************************ 00:13:27.983 16:37:32 event -- event/event.sh@49 -- # uname -s 00:13:27.983 16:37:32 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:13:27.983 16:37:32 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:13:27.983 16:37:32 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:27.983 16:37:32 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:27.983 16:37:32 event -- common/autotest_common.sh@10 -- # set +x 00:13:27.983 ************************************ 00:13:27.983 START TEST event_scheduler 00:13:27.983 ************************************ 00:13:27.983 16:37:32 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:13:27.983 * Looking for test storage... 
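The starred banners that bracket every suite come from the run_test wrapper: it prints a START TEST banner, runs the named command, and prints a matching END TEST banner while propagating the command's exit code. Roughly, as a simplified sketch (the real wrapper also records per-test timing and xtrace state):

```bash
#!/usr/bin/env bash
# Simplified run_test: banner, command, banner; keep the exit code.
run_test() {
  local name=$1; shift
  printf '************************************\nSTART TEST %s\n************************************\n' "${name}"
  "$@"
  local rc=$?
  printf '************************************\nEND TEST %s\n************************************\n' "${name}"
  return "${rc}"
}
run_test demo_suite echo "suite body runs here"
```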
00:13:27.983 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:13:27.983 16:37:32 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:27.983 16:37:32 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:13:27.983 16:37:32 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:27.983 16:37:32 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:27.983 16:37:32 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:13:27.983 16:37:32 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:27.983 16:37:32 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:27.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.983 --rc genhtml_branch_coverage=1 00:13:27.983 --rc genhtml_function_coverage=1 00:13:27.983 --rc genhtml_legend=1 00:13:27.983 --rc geninfo_all_blocks=1 00:13:27.983 --rc geninfo_unexecuted_blocks=1 00:13:27.983 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:27.983 ' 00:13:27.983 16:37:32 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:27.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.983 --rc genhtml_branch_coverage=1 00:13:27.983 --rc genhtml_function_coverage=1 00:13:27.983 --rc genhtml_legend=1 00:13:27.983 --rc geninfo_all_blocks=1 00:13:27.983 --rc geninfo_unexecuted_blocks=1 00:13:27.983 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:27.983 ' 00:13:27.983 16:37:32 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:27.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.983 --rc genhtml_branch_coverage=1 00:13:27.983 --rc genhtml_function_coverage=1 00:13:27.983 --rc genhtml_legend=1 00:13:27.983 --rc geninfo_all_blocks=1 00:13:27.983 --rc geninfo_unexecuted_blocks=1 00:13:27.983 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:27.983 ' 00:13:27.983 16:37:32 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:27.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.983 --rc genhtml_branch_coverage=1 00:13:27.983 --rc genhtml_function_coverage=1 00:13:27.983 --rc genhtml_legend=1 00:13:27.984 --rc geninfo_all_blocks=1 00:13:27.984 --rc geninfo_unexecuted_blocks=1 00:13:27.984 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:27.984 ' 00:13:27.984 16:37:32 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:13:27.984 16:37:32 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3513534 00:13:27.984 16:37:32 event.event_scheduler -- 
scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:13:27.984 16:37:32 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:13:27.984 16:37:32 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3513534 00:13:27.984 16:37:32 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 3513534 ']' 00:13:27.984 16:37:32 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.984 16:37:32 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:27.984 16:37:32 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.984 16:37:32 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:27.984 16:37:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:27.984 [2024-11-05 16:37:32.489245] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:13:27.984 [2024-11-05 16:37:32.489329] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3513534 ] 00:13:28.243 [2024-11-05 16:37:32.591329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.243 [2024-11-05 16:37:32.638428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.243 [2024-11-05 16:37:32.638518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.243 [2024-11-05 16:37:32.638605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.243 [2024-11-05 16:37:32.638607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.243 16:37:32 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:28.243 16:37:32 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:13:28.243 16:37:32 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:13:28.243 16:37:32 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.243 16:37:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:28.243 [2024-11-05 16:37:32.743470] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:13:28.243 [2024-11-05 16:37:32.743491] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:13:28.243 [2024-11-05 16:37:32.743502] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:13:28.243 [2024-11-05 16:37:32.743510] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:13:28.243 [2024-11-05 16:37:32.743517] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:13:28.243 16:37:32 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.243 16:37:32 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:13:28.243 16:37:32 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:28.243 16:37:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:28.243 [2024-11-05 16:37:32.819388] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:13:28.243 16:37:32 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.243 16:37:32 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:13:28.243 16:37:32 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:28.243 16:37:32 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:28.243 16:37:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:28.502 ************************************ 00:13:28.502 START TEST scheduler_create_thread 00:13:28.502 ************************************ 00:13:28.502 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:28.503 2 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:28.503 3 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:28.503 4 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:28.503 5 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.503 
16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:28.503 6 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:28.503 7 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:28.503 8 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:28.503 9 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:28.503 10 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:28.503 16:37:32 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.503 16:37:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:29.885 16:37:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.885 16:37:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:13:29.885 16:37:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:13:29.885 16:37:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.885 16:37:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:30.998 16:37:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.998 00:13:30.998 real 0m2.618s 00:13:30.998 user 0m0.014s 00:13:30.998 sys 0m0.004s 00:13:30.998 16:37:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:30.998 16:37:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:30.998 ************************************ 00:13:30.998 END TEST scheduler_create_thread 00:13:30.998 ************************************ 00:13:30.998 16:37:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:13:30.998 16:37:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3513534 00:13:30.998 16:37:35 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 3513534 ']' 00:13:30.998 16:37:35 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 3513534 00:13:30.998 16:37:35 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:13:30.998 16:37:35 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:30.998 16:37:35 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3513534 00:13:31.257 16:37:35 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:31.257 16:37:35 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:31.257 16:37:35 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3513534' 00:13:31.257 killing process with pid 3513534 00:13:31.257 16:37:35 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 3513534 00:13:31.257 16:37:35 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 3513534 00:13:31.516 [2024-11-05 16:37:35.962047] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
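
The trace above is the common killprocess() helper tearing down the scheduler app: it checks that the PID is non-empty, confirms the process is still alive with kill -0, reads the process name with ps to make sure it is not about to signal a bare sudo wrapper, then kills and reaps it. A minimal sketch of that logic, reconstructed from the trace rather than copied from test/common/autotest_common.sh:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1               # refuse an empty PID
        kill -0 "$pid" 2>/dev/null || return 0  # already gone, nothing to do
        if [[ $(uname) == Linux ]]; then
            # never signal a bare sudo wrapper, only the real workload
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name != sudo ]] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                     # reap it and swallow the exit code
    }
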
00:13:31.775 00:13:31.775 real 0m3.861s 00:13:31.775 user 0m5.905s 00:13:31.775 sys 0m0.461s 00:13:31.775 16:37:36 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:31.775 16:37:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:31.775 ************************************ 00:13:31.776 END TEST event_scheduler 00:13:31.776 ************************************ 00:13:31.776 16:37:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:13:31.776 16:37:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:13:31.776 16:37:36 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:31.776 16:37:36 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:31.776 16:37:36 event -- common/autotest_common.sh@10 -- # set +x 00:13:31.776 ************************************ 00:13:31.776 START TEST app_repeat 00:13:31.776 ************************************ 00:13:31.776 16:37:36 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:13:31.776 16:37:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:31.776 16:37:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:31.776 16:37:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:13:31.776 16:37:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:31.776 16:37:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:13:31.776 16:37:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:13:31.776 16:37:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:13:31.776 16:37:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3514068 00:13:31.776 16:37:36 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:13:31.776 16:37:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:13:31.776 16:37:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3514068' 00:13:31.776 Process app_repeat pid: 3514068 00:13:31.776 16:37:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:13:31.776 16:37:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:13:31.776 spdk_app_start Round 0 00:13:31.776 16:37:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3514068 /var/tmp/spdk-nbd.sock 00:13:31.776 16:37:36 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3514068 ']' 00:13:31.776 16:37:36 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:31.776 16:37:36 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:31.776 16:37:36 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:31.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:31.776 16:37:36 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:31.776 16:37:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:31.776 [2024-11-05 16:37:36.245208] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
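
Here app_repeat starts with a two-core mask (-m 0x3), four-second rounds (-t 4), and its RPC server on /var/tmp/spdk-nbd.sock, and waitforlisten blocks until that socket answers. A hedged sketch of the waiting loop (the probe via rpc_get_methods and the 0.5 s poll interval are assumptions; $rootdir stands for the spdk checkout):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1   # the app died during startup
            # an answered RPC proves the socket is up; -t 1 keeps each probe short
            if "$rootdir"/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1                                     # never came up within the retry budget
    }
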
00:13:31.776 [2024-11-05 16:37:36.245297] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3514068 ] 00:13:32.035 [2024-11-05 16:37:36.372546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:32.035 [2024-11-05 16:37:36.431573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.035 [2024-11-05 16:37:36.431579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.035 16:37:36 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:32.035 16:37:36 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:13:32.035 16:37:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:32.294 Malloc0 00:13:32.294 16:37:36 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:32.553 Malloc1 00:13:32.553 16:37:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:32.553 16:37:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:32.553 16:37:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:32.553 16:37:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:32.553 16:37:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:32.553 16:37:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:32.553 16:37:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:32.553 16:37:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:32.553 16:37:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:32.553 16:37:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:32.553 16:37:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:32.553 16:37:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:32.553 16:37:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:13:32.553 16:37:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:32.553 16:37:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:32.553 16:37:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:32.812 /dev/nbd0 00:13:32.812 16:37:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:32.812 16:37:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:32.812 16:37:37 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:32.812 16:37:37 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:13:32.812 16:37:37 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:32.812 16:37:37 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:32.812 16:37:37 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:13:32.812 16:37:37 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:13:32.812 16:37:37 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:32.812 16:37:37 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:32.812 16:37:37 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:32.812 1+0 records in 00:13:32.812 1+0 records out 00:13:32.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257498 s, 15.9 MB/s 00:13:32.812 16:37:37 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:13:32.812 16:37:37 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:13:32.812 16:37:37 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:13:32.812 16:37:37 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:32.812 16:37:37 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:13:32.812 16:37:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.812 16:37:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:32.812 16:37:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:13:33.071 /dev/nbd1 00:13:33.071 16:37:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:33.071 16:37:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:33.071 16:37:37 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:33.071 16:37:37 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:13:33.071 16:37:37 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:33.071 16:37:37 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:33.071 16:37:37 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:33.071 16:37:37 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:13:33.071 16:37:37 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:33.071 16:37:37 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:33.071 16:37:37 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:33.071 1+0 records in 00:13:33.071 1+0 records out 00:13:33.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257502 s, 15.9 MB/s 00:13:33.072 16:37:37 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:13:33.072 16:37:37 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:13:33.072 16:37:37 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:13:33.072 16:37:37 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:33.072 16:37:37 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:13:33.072 16:37:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.072 16:37:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
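
The nbd0/nbd1 setup above is the waitfornbd helper at work: wait for the device to show up in /proc/partitions, then prove the kernel can actually service I/O on it by reading one 4 KiB block with O_DIRECT and checking the copy is non-empty. A compact sketch of the same steps (the temp-file path is illustrative):

    waitfornbd() {
        local nbd_name=$1 i size tmp=/tmp/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        ((i <= 20)) || return 1    # device never appeared
        for ((i = 1; i <= 20; i++)); do
            # a successful O_DIRECT read means the device accepts real I/O,
            # not merely that it is listed
            if dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                [[ $size != 0 ]] && return 0
            fi
            sleep 0.1
        done
        return 1
    }
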
00:13:33.072 16:37:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:33.072 16:37:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.072 16:37:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:33.330 16:37:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:33.330 { 00:13:33.330 "nbd_device": "/dev/nbd0", 00:13:33.330 "bdev_name": "Malloc0" 00:13:33.330 }, 00:13:33.330 { 00:13:33.330 "nbd_device": "/dev/nbd1", 00:13:33.330 "bdev_name": "Malloc1" 00:13:33.330 } 00:13:33.330 ]' 00:13:33.331 16:37:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:33.331 { 00:13:33.331 "nbd_device": "/dev/nbd0", 00:13:33.331 "bdev_name": "Malloc0" 00:13:33.331 }, 00:13:33.331 { 00:13:33.331 "nbd_device": "/dev/nbd1", 00:13:33.331 "bdev_name": "Malloc1" 00:13:33.331 } 00:13:33.331 ]' 00:13:33.331 16:37:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:33.331 16:37:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:33.331 /dev/nbd1' 00:13:33.331 16:37:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:33.331 /dev/nbd1' 00:13:33.331 16:37:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:33.331 16:37:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:13:33.331 16:37:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:13:33.331 16:37:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:13:33.331 16:37:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:13:33.331 16:37:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:13:33.331 16:37:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:33.331 16:37:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:33.331 16:37:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:33.331 16:37:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:13:33.331 16:37:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:33.331 16:37:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:13:33.331 256+0 records in 00:13:33.331 256+0 records out 00:13:33.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117415 s, 89.3 MB/s 00:13:33.331 16:37:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:33.331 16:37:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:33.591 256+0 records in 00:13:33.591 256+0 records out 00:13:33.591 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289719 s, 36.2 MB/s 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:33.591 256+0 records in 00:13:33.591 256+0 records out 00:13:33.591 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310795 s, 33.7 
MB/s 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.591 16:37:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:33.850 16:37:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:33.850 16:37:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:33.850 16:37:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:33.850 16:37:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.850 16:37:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.850 16:37:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:33.850 16:37:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:33.850 16:37:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.850 16:37:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.850 16:37:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:34.109 16:37:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:34.109 16:37:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:34.109 16:37:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:34.109 16:37:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.109 16:37:38 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.109 16:37:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:34.109 16:37:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:34.109 16:37:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.109 16:37:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:34.109 16:37:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:34.109 16:37:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:34.368 16:37:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:34.368 16:37:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:34.368 16:37:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:34.368 16:37:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:34.368 16:37:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:13:34.368 16:37:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:34.368 16:37:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:13:34.368 16:37:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:13:34.368 16:37:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:13:34.368 16:37:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:13:34.368 16:37:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:34.368 16:37:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:13:34.368 16:37:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:13:34.627 16:37:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:13:34.886 [2024-11-05 16:37:39.330834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:34.886 [2024-11-05 16:37:39.386151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.886 [2024-11-05 16:37:39.386156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.886 [2024-11-05 16:37:39.437426] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:13:34.887 [2024-11-05 16:37:39.437482] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:13:38.173 16:37:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:13:38.173 16:37:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:13:38.173 spdk_app_start Round 1 00:13:38.173 16:37:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3514068 /var/tmp/spdk-nbd.sock 00:13:38.173 16:37:42 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3514068 ']' 00:13:38.173 16:37:42 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:38.173 16:37:42 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:38.173 16:37:42 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:38.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
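
Round 1 begins exactly as Round 0 did, which is the point of the test: app_repeat must survive a SIGTERM-driven restart and rebuild the same state. The round driver, sketched from the trace (the function name is illustrative; the real loop lives in test/event/event.sh):

    app_repeat_rounds() {
        local rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
        for i in {0..2}; do
            echo "spdk_app_start Round $i"
            waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
            # two 64 MiB malloc bdevs with 4 KiB blocks back /dev/nbd0 and /dev/nbd1
            $rpc bdev_malloc_create 64 4096    # -> Malloc0
            $rpc bdev_malloc_create 64 4096    # -> Malloc1
            nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
            # SIGTERM asks the app to tear down and start the next round clean
            $rpc spdk_kill_instance SIGTERM
            sleep 3
        done
    }
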
00:13:38.173 16:37:42 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:38.173 16:37:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:38.173 16:37:42 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:38.173 16:37:42 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:13:38.173 16:37:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:38.173 Malloc0 00:13:38.173 16:37:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:38.432 Malloc1 00:13:38.432 16:37:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:38.432 16:37:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:38.432 16:37:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:38.432 16:37:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:38.432 16:37:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:38.432 16:37:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:38.432 16:37:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:38.432 16:37:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:38.432 16:37:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:38.433 16:37:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:38.433 16:37:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:38.433 16:37:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:38.433 16:37:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:13:38.433 16:37:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:38.433 16:37:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:38.433 16:37:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:38.692 /dev/nbd0 00:13:38.692 16:37:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:38.692 16:37:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:38.692 16:37:43 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:38.692 16:37:43 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:13:38.692 16:37:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:38.692 16:37:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:38.692 16:37:43 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:38.692 16:37:43 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:13:38.692 16:37:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:38.692 16:37:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:38.692 16:37:43 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:38.950 1+0 records in 00:13:38.950 1+0 records out 00:13:38.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025497 s, 16.1 MB/s 00:13:38.950 16:37:43 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:13:38.950 16:37:43 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:13:38.951 16:37:43 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:13:38.951 16:37:43 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:38.951 16:37:43 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:13:38.951 16:37:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.951 16:37:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:38.951 16:37:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:13:38.951 /dev/nbd1 00:13:38.951 16:37:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:38.951 16:37:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:38.951 16:37:43 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:38.951 16:37:43 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:13:38.951 16:37:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:38.951 16:37:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:38.951 16:37:43 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:38.951 16:37:43 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:13:38.951 16:37:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:38.951 16:37:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:38.951 16:37:43 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:38.951 1+0 records in 00:13:38.951 1+0 records out 00:13:38.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295236 s, 13.9 MB/s 00:13:39.210 16:37:43 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:13:39.210 16:37:43 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:13:39.210 16:37:43 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:13:39.210 16:37:43 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:39.210 16:37:43 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:39.210 { 00:13:39.210 "nbd_device": "/dev/nbd0", 00:13:39.210 "bdev_name": "Malloc0" 00:13:39.210 }, 00:13:39.210 { 00:13:39.210 "nbd_device": "/dev/nbd1", 00:13:39.210 "bdev_name": "Malloc1" 00:13:39.210 } 00:13:39.210 ]' 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:39.210 { 00:13:39.210 "nbd_device": "/dev/nbd0", 00:13:39.210 "bdev_name": "Malloc0" 00:13:39.210 }, 00:13:39.210 { 00:13:39.210 "nbd_device": "/dev/nbd1", 00:13:39.210 "bdev_name": "Malloc1" 00:13:39.210 } 00:13:39.210 ]' 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:39.210 /dev/nbd1' 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:39.210 /dev/nbd1' 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:39.210 16:37:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:13:39.469 256+0 records in 00:13:39.469 256+0 records out 00:13:39.469 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010646 s, 98.5 MB/s 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:39.469 256+0 records in 00:13:39.469 256+0 records out 00:13:39.469 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256118 s, 40.9 MB/s 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:39.469 256+0 records in 00:13:39.469 256+0 records out 00:13:39.469 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310558 s, 33.8 MB/s 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.469 16:37:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:39.728 16:37:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:39.728 16:37:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:39.728 16:37:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:39.728 16:37:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.728 16:37:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.728 16:37:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:39.728 16:37:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:39.728 16:37:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.728 16:37:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.728 16:37:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:39.987 16:37:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:39.987 16:37:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:39.987 16:37:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:39.987 16:37:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.987 16:37:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.987 16:37:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:39.987 16:37:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:39.987 16:37:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.987 16:37:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:13:39.987 16:37:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.987 16:37:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:40.246 16:37:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:40.246 16:37:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:40.246 16:37:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:40.246 16:37:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:40.246 16:37:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:40.246 16:37:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:13:40.246 16:37:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:13:40.246 16:37:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:13:40.246 16:37:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:13:40.246 16:37:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:13:40.246 16:37:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:40.246 16:37:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:13:40.246 16:37:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:13:40.505 16:37:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:13:40.765 [2024-11-05 16:37:45.292651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:40.765 [2024-11-05 16:37:45.347497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.765 [2024-11-05 16:37:45.347502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.024 [2024-11-05 16:37:45.399209] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:13:41.024 [2024-11-05 16:37:45.399265] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:13:43.558 16:37:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:13:43.558 16:37:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:13:43.558 spdk_app_start Round 2 00:13:43.558 16:37:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3514068 /var/tmp/spdk-nbd.sock 00:13:43.558 16:37:48 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3514068 ']' 00:13:43.558 16:37:48 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:43.558 16:37:48 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:43.558 16:37:48 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:43.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
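
The data check repeated in each round is nbd_dd_data_verify: seed 1 MiB of random data, push it through every nbd device with O_DIRECT writes, then cmp each device byte-for-byte against the source file, exactly as the dd/cmp pairs above show. A sketch that folds the helper's write and verify passes into one function (the real helper takes an operation argument and is invoked once per pass):

    nbd_dd_data_verify() {
        local nbd_list=("$@") tmp=/tmp/nbdrandtest dev
        dd if=/dev/urandom of="$tmp" bs=4096 count=256    # 256 x 4 KiB = 1 MiB of random data
        for dev in "${nbd_list[@]}"; do
            dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
        done
        for dev in "${nbd_list[@]}"; do
            # -b prints any differing byte; -n 1M bounds the compare to what was written
            cmp -b -n 1M "$tmp" "$dev"
        done
        rm "$tmp"
    }

Usage matching the log: nbd_dd_data_verify /dev/nbd0 /dev/nbd1
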
00:13:43.558 16:37:48 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:43.558 16:37:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:43.818 16:37:48 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:43.818 16:37:48 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:13:43.818 16:37:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:44.077 Malloc0 00:13:44.077 16:37:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:44.336 Malloc1 00:13:44.336 16:37:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:44.336 16:37:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:44.336 16:37:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:44.336 16:37:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:44.336 16:37:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:44.336 16:37:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:44.336 16:37:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:44.336 16:37:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:44.336 16:37:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:44.336 16:37:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.336 16:37:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:44.336 16:37:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.336 16:37:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:13:44.336 16:37:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.336 16:37:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:44.336 16:37:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:44.904 /dev/nbd0 00:13:44.904 16:37:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:44.904 16:37:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:44.904 16:37:49 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:44.904 16:37:49 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:13:44.904 16:37:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:44.904 16:37:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:44.904 16:37:49 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:44.904 16:37:49 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:13:44.904 16:37:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:44.904 16:37:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:44.904 16:37:49 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:44.904 1+0 records in 00:13:44.904 1+0 records out 00:13:44.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018965 s, 21.6 MB/s 00:13:44.904 16:37:49 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:13:44.904 16:37:49 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:13:44.904 16:37:49 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:13:44.904 16:37:49 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:44.904 16:37:49 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:13:44.904 16:37:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.904 16:37:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:44.905 16:37:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:13:45.164 /dev/nbd1 00:13:45.164 16:37:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:45.164 16:37:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:45.164 16:37:49 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:45.164 16:37:49 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:13:45.164 16:37:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:45.164 16:37:49 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:45.164 16:37:49 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:45.164 16:37:49 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:13:45.164 16:37:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:45.164 16:37:49 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:45.164 16:37:49 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:45.164 1+0 records in 00:13:45.164 1+0 records out 00:13:45.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180871 s, 22.6 MB/s 00:13:45.164 16:37:49 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:13:45.164 16:37:49 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:13:45.164 16:37:49 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:13:45.164 16:37:49 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:45.164 16:37:49 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:13:45.164 16:37:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:45.164 16:37:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:45.164 16:37:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:45.164 16:37:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:45.164 16:37:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:45.424 { 00:13:45.424 "nbd_device": "/dev/nbd0", 00:13:45.424 "bdev_name": "Malloc0" 00:13:45.424 }, 00:13:45.424 { 00:13:45.424 "nbd_device": "/dev/nbd1", 00:13:45.424 "bdev_name": "Malloc1" 00:13:45.424 } 00:13:45.424 ]' 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:45.424 { 00:13:45.424 "nbd_device": "/dev/nbd0", 00:13:45.424 "bdev_name": "Malloc0" 00:13:45.424 }, 00:13:45.424 { 00:13:45.424 "nbd_device": "/dev/nbd1", 00:13:45.424 "bdev_name": "Malloc1" 00:13:45.424 } 00:13:45.424 ]' 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:45.424 /dev/nbd1' 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:45.424 /dev/nbd1' 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:13:45.424 256+0 records in 00:13:45.424 256+0 records out 00:13:45.424 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117608 s, 89.2 MB/s 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:45.424 256+0 records in 00:13:45.424 256+0 records out 00:13:45.424 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290782 s, 36.1 MB/s 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:45.424 256+0 records in 00:13:45.424 256+0 records out 00:13:45.424 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310279 s, 33.8 MB/s 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:13:45.424 16:37:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:13:45.425 16:37:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:45.425 16:37:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:45.425 16:37:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:45.425 16:37:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:13:45.425 16:37:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:45.425 16:37:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:13:45.425 16:37:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:13:45.425 16:37:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:13:45.425 16:37:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:45.425 16:37:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:45.425 16:37:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:45.425 16:37:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:13:45.425 16:37:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.425 16:37:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:45.683 16:37:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:45.683 16:37:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:45.683 16:37:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:45.683 16:37:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.683 16:37:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.683 16:37:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:45.683 16:37:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:45.683 16:37:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.683 16:37:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.683 16:37:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:45.942 16:37:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:46.200 16:37:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:46.200 16:37:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:46.200 16:37:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:46.200 16:37:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:46.200 16:37:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:46.200 16:37:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:46.200 16:37:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:46.200 16:37:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:13:46.200 16:37:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:46.200 16:37:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:46.459 16:37:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:46.459 16:37:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:46.459 16:37:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:46.459 16:37:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:46.459 16:37:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:13:46.459 16:37:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:46.459 16:37:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:13:46.459 16:37:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:13:46.459 16:37:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:13:46.459 16:37:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:13:46.459 16:37:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:46.459 16:37:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:13:46.459 16:37:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:13:46.717 16:37:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:13:46.976 [2024-11-05 16:37:51.348759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:46.976 [2024-11-05 16:37:51.404294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.976 [2024-11-05 16:37:51.404299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.976 [2024-11-05 16:37:51.449027] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:13:46.976 [2024-11-05 16:37:51.449079] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:13:50.260 16:37:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3514068 /var/tmp/spdk-nbd.sock 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 3514068 ']' 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:50.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
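
Note: the nbd_dd_data_verify traces above reduce to a small write-then-verify loop: seed a 1 MiB pattern file from /dev/urandom, dd it onto every exported NBD device with O_DIRECT, then cmp each device against the pattern. A minimal stand-alone sketch of that flow (the temp path and device list here are illustrative, not the exact values from this run):

#!/usr/bin/env bash
# Write a random 1 MiB pattern to each NBD device, then read it back and compare.
set -euo pipefail

tmp_file=/tmp/nbdrandtest                                    # illustrative path
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256          # write phase: seed the pattern
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

for dev in "${nbd_list[@]}"; do                              # verify phase
    cmp -b -n 1M "$tmp_file" "$dev"                          # non-zero exit => data mismatch
done
rm "$tmp_file"
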
00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:13:50.260 16:37:54 event.app_repeat -- event/event.sh@39 -- # killprocess 3514068 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 3514068 ']' 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 3514068 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3514068 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3514068' 00:13:50.260 killing process with pid 3514068 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@971 -- # kill 3514068 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@976 -- # wait 3514068 00:13:50.260 spdk_app_start is called in Round 0. 00:13:50.260 Shutdown signal received, stop current app iteration 00:13:50.260 Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 reinitialization... 00:13:50.260 spdk_app_start is called in Round 1. 00:13:50.260 Shutdown signal received, stop current app iteration 00:13:50.260 Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 reinitialization... 00:13:50.260 spdk_app_start is called in Round 2. 00:13:50.260 Shutdown signal received, stop current app iteration 00:13:50.260 Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 reinitialization... 00:13:50.260 spdk_app_start is called in Round 3. 
00:13:50.260 Shutdown signal received, stop current app iteration 00:13:50.260 16:37:54 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:13:50.260 16:37:54 event.app_repeat -- event/event.sh@42 -- # return 0 00:13:50.260 00:13:50.260 real 0m18.348s 00:13:50.260 user 0m40.076s 00:13:50.260 sys 0m3.913s 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:50.260 16:37:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:50.260 ************************************ 00:13:50.260 END TEST app_repeat 00:13:50.260 ************************************ 00:13:50.260 16:37:54 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:13:50.260 16:37:54 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:13:50.260 16:37:54 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:50.260 16:37:54 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:50.260 16:37:54 event -- common/autotest_common.sh@10 -- # set +x 00:13:50.260 ************************************ 00:13:50.260 START TEST cpu_locks 00:13:50.260 ************************************ 00:13:50.260 16:37:54 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:13:50.260 * Looking for test storage... 00:13:50.260 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:13:50.260 16:37:54 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:50.260 16:37:54 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:13:50.260 16:37:54 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:50.260 16:37:54 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:13:50.260 16:37:54 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:13:50.519 16:37:54 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:13:50.519 16:37:54 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:50.519 16:37:54 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:13:50.519 16:37:54 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:13:50.519 16:37:54 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:50.519 16:37:54 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:50.519 16:37:54 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:13:50.519 16:37:54 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:50.519 16:37:54 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:50.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.519 --rc genhtml_branch_coverage=1 00:13:50.519 --rc genhtml_function_coverage=1 00:13:50.519 --rc genhtml_legend=1 00:13:50.519 --rc geninfo_all_blocks=1 00:13:50.519 --rc geninfo_unexecuted_blocks=1 00:13:50.519 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:50.519 ' 00:13:50.519 16:37:54 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:50.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.519 --rc genhtml_branch_coverage=1 00:13:50.519 --rc genhtml_function_coverage=1 00:13:50.519 --rc genhtml_legend=1 00:13:50.519 --rc geninfo_all_blocks=1 00:13:50.519 --rc geninfo_unexecuted_blocks=1 00:13:50.519 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:50.519 ' 00:13:50.519 16:37:54 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:50.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.519 --rc genhtml_branch_coverage=1 00:13:50.519 --rc genhtml_function_coverage=1 00:13:50.519 --rc genhtml_legend=1 00:13:50.519 --rc geninfo_all_blocks=1 00:13:50.519 --rc geninfo_unexecuted_blocks=1 00:13:50.519 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:50.519 ' 00:13:50.519 16:37:54 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:50.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.519 --rc genhtml_branch_coverage=1 00:13:50.519 --rc genhtml_function_coverage=1 00:13:50.519 --rc genhtml_legend=1 00:13:50.519 --rc geninfo_all_blocks=1 00:13:50.519 --rc geninfo_unexecuted_blocks=1 00:13:50.519 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:13:50.519 ' 00:13:50.519 16:37:54 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:13:50.519 16:37:54 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:13:50.519 16:37:54 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:13:50.519 16:37:54 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:13:50.519 16:37:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:50.519 16:37:54 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:50.519 16:37:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:50.519 ************************************ 00:13:50.519 START TEST default_locks 00:13:50.519 ************************************ 00:13:50.519 16:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:13:50.519 16:37:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3516747 00:13:50.519 16:37:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3516747 00:13:50.519 16:37:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:13:50.519 16:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3516747 ']' 00:13:50.519 16:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.519 16:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:50.519 16:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.520 16:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:50.520 16:37:54 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:13:50.520 [2024-11-05 16:37:54.917491] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
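
Note: the lt 1.15 2 trace a little further up is scripts/common.sh comparing versions component-wise: both strings are split on ., - and :, and the fields are walked numerically until one side wins. A simplified sketch of that idea, not the verbatim SPDK helper, assuming numeric fields:

# Return 0 when version $1 sorts strictly before version $2, field by field.
version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < n; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                                 # equal versions are not less-than
}

version_lt 1.15 2 && echo "lcov predates 2.x: enable the branch/function coverage opts"
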
00:13:50.520 [2024-11-05 16:37:54.917559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3516747 ] 00:13:50.520 [2024-11-05 16:37:55.025101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.520 [2024-11-05 16:37:55.080806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.454 16:37:55 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:51.454 16:37:55 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:13:51.454 16:37:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3516747 00:13:51.454 16:37:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3516747 00:13:51.454 16:37:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:52.019 lslocks: write error 00:13:52.019 16:37:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3516747 00:13:52.019 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 3516747 ']' 00:13:52.019 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 3516747 00:13:52.019 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:13:52.019 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:52.019 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3516747 00:13:52.020 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:52.020 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:52.020 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3516747' 00:13:52.020 killing process with pid 3516747 00:13:52.020 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 3516747 00:13:52.020 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 3516747 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3516747 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3516747 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 3516747 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 3516747 ']' 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:13:52.586 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3516747) - No such process 00:13:52.586 ERROR: process (pid: 3516747) is no longer running 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:13:52.586 00:13:52.586 real 0m2.056s 00:13:52.586 user 0m2.181s 00:13:52.586 sys 0m0.786s 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:52.586 16:37:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:13:52.586 ************************************ 00:13:52.586 END TEST default_locks 00:13:52.586 ************************************ 00:13:52.586 16:37:56 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:13:52.586 16:37:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:52.586 16:37:56 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:52.586 16:37:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:52.586 ************************************ 00:13:52.586 START TEST default_locks_via_rpc 00:13:52.586 ************************************ 00:13:52.586 16:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:13:52.586 16:37:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3517123 00:13:52.586 16:37:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3517123 00:13:52.586 16:37:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:13:52.586 16:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3517123 ']' 00:13:52.586 16:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.586 16:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 
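
Note: two helpers dominate the default_locks run above: locks_exist, which asks lslocks whether the target still holds a spdk_cpu_lock file, and killprocess, which checks the PID is alive and not a sudo wrapper before signalling it. Reconstructed, simplified sketches of both; the stray "lslocks: write error" in the log is expected, because grep -q closes the pipe on its first match:

# Does PID $1 hold an SPDK per-core lock file?
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock   # grep -q exits early, so lslocks sees EPIPE
}

# Terminate a test target and reap it.
killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" || return 1                       # fail if the PID is already gone
    if [[ $(uname) == Linux ]]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [[ ${process_name:-} == sudo ]] && return 1      # never signal the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                              # reap; a SIGTERM'd child exits non-zero
}
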
00:13:52.586 16:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.586 16:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:52.586 16:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.586 [2024-11-05 16:37:57.056623] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:13:52.586 [2024-11-05 16:37:57.056694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3517123 ] 00:13:52.845 [2024-11-05 16:37:57.183989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.845 [2024-11-05 16:37:57.239710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.103 16:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:53.103 16:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:13:53.103 16:37:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:13:53.103 16:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.103 16:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.103 16:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.103 16:37:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:13:53.103 16:37:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:13:53.103 16:37:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:13:53.103 16:37:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:13:53.103 16:37:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:13:53.103 16:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.103 16:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.103 16:37:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.103 16:37:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3517123 00:13:53.103 16:37:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3517123 00:13:53.103 16:37:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:53.669 16:37:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3517123 00:13:53.669 16:37:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 3517123 ']' 00:13:53.670 16:37:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 3517123 00:13:53.670 16:37:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:13:53.670 16:37:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- 
# '[' Linux = Linux ']' 00:13:53.670 16:37:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3517123 00:13:53.670 16:37:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:53.670 16:37:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:53.670 16:37:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3517123' 00:13:53.670 killing process with pid 3517123 00:13:53.670 16:37:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 3517123 00:13:53.670 16:37:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 3517123 00:13:53.928 00:13:53.928 real 0m1.426s 00:13:53.928 user 0m1.397s 00:13:53.928 sys 0m0.687s 00:13:53.928 16:37:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:53.928 16:37:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:53.928 ************************************ 00:13:53.928 END TEST default_locks_via_rpc 00:13:53.928 ************************************ 00:13:53.928 16:37:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:13:53.928 16:37:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:53.928 16:37:58 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:53.928 16:37:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:54.186 ************************************ 00:13:54.186 START TEST non_locking_app_on_locked_coremask 00:13:54.186 ************************************ 00:13:54.186 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:13:54.186 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:13:54.186 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3517332 00:13:54.186 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3517332 /var/tmp/spdk.sock 00:13:54.186 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3517332 ']' 00:13:54.186 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.186 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:54.186 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.186 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:54.186 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:54.186 [2024-11-05 16:37:58.549670] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
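
Note: default_locks_via_rpc, which wraps up above, exercises the same lock but toggles it at runtime instead of at launch: framework_disable_cpumask_locks releases the per-core lock files over the RPC socket, and framework_enable_cpumask_locks re-acquires them. A condensed sketch, assuming a running target whose PID is in tgt_pid and the default /var/tmp/spdk.sock socket:

rpc=./scripts/rpc.py                     # illustrative path to the SPDK rpc client

"$rpc" framework_disable_cpumask_locks   # drop the core-0 lock while running
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "unexpected: lock still held"

"$rpc" framework_enable_cpumask_locks    # take it back
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"
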
00:13:54.186 [2024-11-05 16:37:58.549733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3517332 ] 00:13:54.186 [2024-11-05 16:37:58.674242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.186 [2024-11-05 16:37:58.732058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.444 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:54.444 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:13:54.444 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:13:54.444 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3517341 00:13:54.444 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3517341 /var/tmp/spdk2.sock 00:13:54.444 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3517341 ']' 00:13:54.444 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:54.444 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:54.444 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:54.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:54.444 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:54.444 16:37:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:54.444 [2024-11-05 16:37:59.001149] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:13:54.444 [2024-11-05 16:37:59.001202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3517341 ] 00:13:54.702 [2024-11-05 16:37:59.149341] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:54.702 [2024-11-05 16:37:59.149381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.702 [2024-11-05 16:37:59.260619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.635 16:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:55.635 16:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:13:55.635 16:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3517332 00:13:55.635 16:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3517332 00:13:55.635 16:37:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:55.892 lslocks: write error 00:13:55.892 16:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3517332 00:13:55.892 16:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3517332 ']' 00:13:55.892 16:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3517332 00:13:55.892 16:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:13:55.892 16:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:55.892 16:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3517332 00:13:56.150 16:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:56.150 16:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:56.150 16:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3517332' 00:13:56.150 killing process with pid 3517332 00:13:56.150 16:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3517332 00:13:56.150 16:38:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3517332 00:13:56.717 16:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3517341 00:13:56.717 16:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3517341 ']' 00:13:56.717 16:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3517341 00:13:56.717 16:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:13:56.717 16:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:56.717 16:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3517341 00:13:56.717 16:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:56.717 16:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:56.717 16:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3517341' 00:13:56.717 
killing process with pid 3517341 00:13:56.717 16:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3517341 00:13:56.717 16:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3517341 00:13:57.282 00:13:57.282 real 0m3.080s 00:13:57.282 user 0m3.220s 00:13:57.282 sys 0m1.075s 00:13:57.282 16:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:57.282 16:38:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:57.282 ************************************ 00:13:57.282 END TEST non_locking_app_on_locked_coremask 00:13:57.282 ************************************ 00:13:57.282 16:38:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:13:57.282 16:38:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:57.283 16:38:01 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:57.283 16:38:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:57.283 ************************************ 00:13:57.283 START TEST locking_app_on_unlocked_coremask 00:13:57.283 ************************************ 00:13:57.283 16:38:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:13:57.283 16:38:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3517728 00:13:57.283 16:38:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3517728 /var/tmp/spdk.sock 00:13:57.283 16:38:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:13:57.283 16:38:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3517728 ']' 00:13:57.283 16:38:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.283 16:38:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:57.283 16:38:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.283 16:38:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:57.283 16:38:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:57.283 [2024-11-05 16:38:01.712483] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:13:57.283 [2024-11-05 16:38:01.712541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3517728 ] 00:13:57.283 [2024-11-05 16:38:01.833767] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
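
Note: non_locking_app_on_locked_coremask ends here, and the test opening around it hinges on the same launch flag: a second target can share an already-claimed core only when it opts out of lock acquisition with --disable-cpumask-locks (the "CPU core locks deactivated" notice). A sketch of the two-target setup, with an illustrative binary path:

spdk_tgt=./build/bin/spdk_tgt                                  # illustrative path

"$spdk_tgt" -m 0x1 &                                           # first target claims core 0
pid1=$!
"$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!                                                        # same core, but takes no lock
# both reach spdk_app_start; only pid1 shows a spdk_cpu_lock entry in lslocks
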
00:13:57.283 [2024-11-05 16:38:01.833806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.541 [2024-11-05 16:38:01.891511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.798 16:38:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:57.798 16:38:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:13:57.798 16:38:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3517856 00:13:57.798 16:38:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3517856 /var/tmp/spdk2.sock 00:13:57.798 16:38:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:13:57.798 16:38:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3517856 ']' 00:13:57.798 16:38:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:57.798 16:38:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:57.798 16:38:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:57.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:57.798 16:38:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:57.798 16:38:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:57.798 [2024-11-05 16:38:02.170406] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
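
Note: every launch in these traces funnels through waitforlisten, which polls (max_retries=100 here) until the target either dies or its UNIX-domain RPC socket comes up. A minimal sketch, assuming that checking for the socket file is an adequate stand-in for the real helper's RPC probe:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 1; i <= max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
        [[ -S $rpc_addr ]] && return 0           # socket exists; assume it accepts
        sleep 0.1
    done
    return 1
}
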
00:13:57.798 [2024-11-05 16:38:02.170483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3517856 ] 00:13:57.798 [2024-11-05 16:38:02.337789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.056 [2024-11-05 16:38:02.450314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.621 16:38:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:58.621 16:38:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:13:58.621 16:38:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3517856 00:13:58.621 16:38:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3517856 00:13:58.621 16:38:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:59.996 lslocks: write error 00:13:59.996 16:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3517728 00:13:59.996 16:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3517728 ']' 00:13:59.996 16:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3517728 00:13:59.996 16:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:13:59.996 16:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:59.996 16:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3517728 00:13:59.996 16:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:59.996 16:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:59.996 16:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3517728' 00:13:59.996 killing process with pid 3517728 00:13:59.996 16:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3517728 00:13:59.996 16:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3517728 00:14:00.562 16:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3517856 00:14:00.562 16:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3517856 ']' 00:14:00.562 16:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 3517856 00:14:00.562 16:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:14:00.562 16:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:00.562 16:38:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3517856 00:14:00.562 16:38:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:00.562 16:38:05 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:00.562 16:38:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3517856' 00:14:00.562 killing process with pid 3517856 00:14:00.562 16:38:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 3517856 00:14:00.562 16:38:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 3517856 00:14:00.821 00:14:00.821 real 0m3.719s 00:14:00.821 user 0m3.919s 00:14:00.821 sys 0m1.442s 00:14:00.821 16:38:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:00.821 16:38:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:00.821 ************************************ 00:14:00.821 END TEST locking_app_on_unlocked_coremask 00:14:00.821 ************************************ 00:14:01.079 16:38:05 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:14:01.079 16:38:05 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:01.079 16:38:05 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:01.079 16:38:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:01.079 ************************************ 00:14:01.079 START TEST locking_app_on_locked_coremask 00:14:01.079 ************************************ 00:14:01.079 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:14:01.079 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3518289 00:14:01.079 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3518289 /var/tmp/spdk.sock 00:14:01.079 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:14:01.079 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3518289 ']' 00:14:01.079 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.079 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:01.079 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.079 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:01.079 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:01.079 [2024-11-05 16:38:05.509571] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
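
Note: locking_app_on_locked_coremask, starting here, is the negative case: the second target reuses mask 0x1 without --disable-cpumask-locks, so app.c refuses to start it ("Cannot create lock on core 0...", visible below). The lock behaves like an exclusive flock on a per-core file (the spdk_cpu_lock name grep'd for above), which can be demonstrated outside SPDK; the path and naming here are illustrative:

lock=/tmp/demo_cpu_lock_0                 # hypothetical stand-in for SPDK's lock file
exec 9>"$lock"
flock -n 9 && echo "core 0 claimed"       # first holder wins
( exec 8>"$lock"
  flock -n 8 || echo "core 0 already claimed by another process" )
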
00:14:01.079 [2024-11-05 16:38:05.509629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3518289 ] 00:14:01.079 [2024-11-05 16:38:05.633783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.337 [2024-11-05 16:38:05.689930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.595 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:01.595 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:14:01.595 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3518297 00:14:01.595 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3518297 /var/tmp/spdk2.sock 00:14:01.596 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:14:01.596 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:14:01.596 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3518297 /var/tmp/spdk2.sock 00:14:01.596 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:14:01.596 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.596 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:14:01.596 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.596 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3518297 /var/tmp/spdk2.sock 00:14:01.596 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 3518297 ']' 00:14:01.596 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:01.596 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:01.596 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:01.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:01.596 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:01.596 16:38:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:01.596 [2024-11-05 16:38:05.970447] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
00:14:01.596 [2024-11-05 16:38:05.970516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3518297 ] 00:14:01.596 [2024-11-05 16:38:06.136386] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3518289 has claimed it. 00:14:01.596 [2024-11-05 16:38:06.136439] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:14:02.159 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3518297) - No such process 00:14:02.159 ERROR: process (pid: 3518297) is no longer running 00:14:02.160 16:38:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:02.160 16:38:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:14:02.160 16:38:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:14:02.160 16:38:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:02.160 16:38:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:02.160 16:38:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:02.160 16:38:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3518289 00:14:02.160 16:38:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3518289 00:14:02.160 16:38:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:02.724 lslocks: write error 00:14:02.724 16:38:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3518289 00:14:02.724 16:38:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 3518289 ']' 00:14:02.724 16:38:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 3518289 00:14:02.724 16:38:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:14:02.724 16:38:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:02.724 16:38:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3518289 00:14:02.724 16:38:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:02.724 16:38:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:02.724 16:38:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3518289' 00:14:02.724 killing process with pid 3518289 00:14:02.724 16:38:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 3518289 00:14:02.724 16:38:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 3518289 00:14:02.982 00:14:02.982 real 0m2.036s 00:14:02.982 user 0m2.125s 00:14:02.982 sys 0m0.810s 00:14:02.982 16:38:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:14:02.982 16:38:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:02.982 ************************************ 00:14:02.982 END TEST locking_app_on_locked_coremask 00:14:02.982 ************************************ 00:14:02.982 16:38:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:14:02.982 16:38:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:02.982 16:38:07 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:02.982 16:38:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:03.240 ************************************ 00:14:03.240 START TEST locking_overlapped_coremask 00:14:03.240 ************************************ 00:14:03.240 16:38:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:14:03.240 16:38:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3518625 00:14:03.240 16:38:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:14:03.240 16:38:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3518625 /var/tmp/spdk.sock 00:14:03.240 16:38:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3518625 ']' 00:14:03.240 16:38:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.240 16:38:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:03.240 16:38:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.240 16:38:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:03.240 16:38:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:03.240 [2024-11-05 16:38:07.612024] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
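(Annotation, not part of the captured run.) The core masks driving this test are worth decoding: -m 0x7 above is binary 00111, i.e. cores 0-2, which matches the "Total cores available: 3" notice and the three reactor lines that follow; the second instance launched shortly below uses -m 0x1c, binary 11100, i.e. cores 2-4. The two masks overlap only on core 2, which is exactly the core the claim error will name. A minimal decoder:

    for mask in 0x7 0x1c; do
      cores=()
      for bit in {0..31}; do
        (( mask >> bit & 1 )) && cores+=("$bit")   # test each bit of the mask
      done
      echo "mask $mask -> cores ${cores[*]}"
    done
    # mask 0x7  -> cores 0 1 2
    # mask 0x1c -> cores 2 3 4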
00:14:03.240 [2024-11-05 16:38:07.612083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3518625 ] 00:14:03.240 [2024-11-05 16:38:07.724898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:03.240 [2024-11-05 16:38:07.787826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.240 [2024-11-05 16:38:07.787912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.241 [2024-11-05 16:38:07.787917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.499 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:03.499 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:14:03.499 16:38:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3518676 00:14:03.499 16:38:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3518676 /var/tmp/spdk2.sock 00:14:03.499 16:38:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:14:03.499 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:14:03.499 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3518676 /var/tmp/spdk2.sock 00:14:03.499 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:14:03.499 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.499 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:14:03.499 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.499 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3518676 /var/tmp/spdk2.sock 00:14:03.499 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 3518676 ']' 00:14:03.499 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:03.499 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:03.499 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:03.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:03.499 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:03.499 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:03.499 [2024-11-05 16:38:08.070656] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
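(Annotation.) The NOT wrapper visible above is how the suite asserts an expected failure: it runs the wrapped command, captures the exit status, and succeeds only if that status is nonzero -- the es=1 / (( !es == 0 )) lines that follow are its bookkeeping. A simplified sketch of the pattern (the real autotest_common.sh helper adds the type -t argument validation traced above and also special-cases signal exits via (( es > 128 ))):

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command, keep its exit status
        (( es != 0 ))    # pass only if the command failed
    }
    # Usage mirroring this test: the second target must fail to come up.
    NOT waitforlisten 3518676 /var/tmp/spdk2.sock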
00:14:03.499 [2024-11-05 16:38:08.070741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3518676 ] 00:14:03.757 [2024-11-05 16:38:08.205326] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3518625 has claimed it. 00:14:03.757 [2024-11-05 16:38:08.205364] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:14:04.322 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (3518676) - No such process 00:14:04.322 ERROR: process (pid: 3518676) is no longer running 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3518625 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 3518625 ']' 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 3518625 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3518625 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3518625' 00:14:04.322 killing process with pid 3518625 00:14:04.322 16:38:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 3518625 00:14:04.322 16:38:08 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 3518625 00:14:04.889 00:14:04.889 real 0m1.654s 00:14:04.889 user 0m4.611s 00:14:04.889 sys 0m0.520s 00:14:04.889 16:38:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:04.889 16:38:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:04.889 ************************************ 00:14:04.889 END TEST locking_overlapped_coremask 00:14:04.889 ************************************ 00:14:04.889 16:38:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:14:04.889 16:38:09 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:04.889 16:38:09 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:04.889 16:38:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:04.889 ************************************ 00:14:04.889 START TEST locking_overlapped_coremask_via_rpc 00:14:04.889 ************************************ 00:14:04.889 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:14:04.889 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:14:04.889 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3518878 00:14:04.889 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3518878 /var/tmp/spdk.sock 00:14:04.889 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3518878 ']' 00:14:04.889 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.889 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:04.889 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.889 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:04.889 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.889 [2024-11-05 16:38:09.327999] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:14:04.889 [2024-11-05 16:38:09.328041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3518878 ] 00:14:04.889 [2024-11-05 16:38:09.424686] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
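(Annotation.) The test that just ended hinges on SPDK's per-core lock files: every claimed core is backed by /var/tmp/spdk_cpu_lock_NNN (zero-padded to three digits), which is what the earlier lslocks pipe greps for and what check_remaining_locks in cpu_locks.sh compares, essentially verbatim from the trace above:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually present
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2, matching -m 0x7
        [[ ${locks[*]} == "${locks_expected[*]}" ]]         # glob result must match exactly
    }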
00:14:04.889 [2024-11-05 16:38:09.424729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:05.147 [2024-11-05 16:38:09.488730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.147 [2024-11-05 16:38:09.488753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.147 [2024-11-05 16:38:09.488758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.405 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:05.405 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:14:05.405 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3518890 00:14:05.405 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3518890 /var/tmp/spdk2.sock 00:14:05.406 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3518890 ']' 00:14:05.406 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:05.406 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:05.406 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:05.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:05.406 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:05.406 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.406 16:38:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:14:05.406 [2024-11-05 16:38:09.759526] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:14:05.406 [2024-11-05 16:38:09.759606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3518890 ] 00:14:05.406 [2024-11-05 16:38:09.894516] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
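(Annotation; paths shortened for readability.) Note the difference from the previous test: with --disable-cpumask-locks neither target takes the per-core lock files at startup, so two instances with overlapping masks both boot cleanly -- the "CPU core locks deactivated." notices above confirm it. Condensed, the two launches are:

    spdk_tgt -m 0x7  --disable-cpumask-locks                          # first target, on spdk.sock
    spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock   # overlaps on core 2, still boots

Locking is then re-enabled selectively over RPC, which is what the framework_enable_cpumask_locks calls below exercise.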
00:14:05.406 [2024-11-05 16:38:09.894544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:05.406 [2024-11-05 16:38:09.989332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.663 [2024-11-05 16:38:09.992739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:05.663 [2024-11-05 16:38:09.992741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.228 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:06.228 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:14:06.228 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:14:06.228 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.228 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.228 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.228 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:14:06.228 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:06.228 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:14:06.228 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:06.228 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.228 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:06.228 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.228 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:14:06.228 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.228 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.228 [2024-11-05 16:38:10.804783] app.c: 782:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3518878 has claimed it. 
00:14:06.228 request: 00:14:06.228 { 00:14:06.228 "method": "framework_enable_cpumask_locks", 00:14:06.228 "req_id": 1 00:14:06.228 } 00:14:06.228 Got JSON-RPC error response 00:14:06.228 response: 00:14:06.228 { 00:14:06.486 "code": -32603, 00:14:06.486 "message": "Failed to claim CPU core: 2" 00:14:06.486 } 00:14:06.486 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:06.486 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:06.486 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:06.486 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:06.486 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:06.486 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3518878 /var/tmp/spdk.sock 00:14:06.486 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3518878 ']' 00:14:06.486 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.486 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:06.486 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.486 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:06.486 16:38:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.744 16:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:06.744 16:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:14:06.744 16:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3518890 /var/tmp/spdk2.sock 00:14:06.744 16:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 3518890 ']' 00:14:06.744 16:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:06.744 16:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:06.744 16:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:06.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
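(Annotation.) The JSON-RPC exchange above can be replayed by hand with the same script the suite drives; -s selects the target's Unix socket:

    scripts/rpc.py framework_enable_cpumask_locks       # first target: claims locks 000-002
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> error -32603 "Failed to claim CPU core: 2": the first target
    #    already holds /var/tmp/spdk_cpu_lock_002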
00:14:06.744 16:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:06.744 16:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.002 16:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:07.002 16:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:14:07.002 16:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:14:07.002 16:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:14:07.002 16:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:14:07.002 16:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:14:07.002 00:14:07.002 real 0m2.076s 00:14:07.002 user 0m1.106s 00:14:07.002 sys 0m0.219s 00:14:07.002 16:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:07.002 16:38:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.002 ************************************ 00:14:07.002 END TEST locking_overlapped_coremask_via_rpc 00:14:07.002 ************************************ 00:14:07.002 16:38:11 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:14:07.002 16:38:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3518878 ]] 00:14:07.002 16:38:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3518878 00:14:07.002 16:38:11 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3518878 ']' 00:14:07.002 16:38:11 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3518878 00:14:07.002 16:38:11 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:14:07.002 16:38:11 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:07.002 16:38:11 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3518878 00:14:07.002 16:38:11 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:07.002 16:38:11 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:07.002 16:38:11 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3518878' 00:14:07.002 killing process with pid 3518878 00:14:07.002 16:38:11 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3518878 00:14:07.002 16:38:11 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3518878 00:14:07.570 16:38:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3518890 ]] 00:14:07.570 16:38:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3518890 00:14:07.570 16:38:11 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3518890 ']' 00:14:07.570 16:38:11 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3518890 00:14:07.570 16:38:11 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:14:07.570 16:38:11 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:14:07.570 16:38:11 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3518890 00:14:07.570 16:38:11 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:07.570 16:38:11 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:07.570 16:38:11 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3518890' 00:14:07.570 killing process with pid 3518890 00:14:07.570 16:38:11 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 3518890 00:14:07.570 16:38:11 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 3518890 00:14:07.829 16:38:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:14:07.829 16:38:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:14:07.829 16:38:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3518878 ]] 00:14:07.829 16:38:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3518878 00:14:07.829 16:38:12 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3518878 ']' 00:14:07.829 16:38:12 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3518878 00:14:07.829 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3518878) - No such process 00:14:07.829 16:38:12 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3518878 is not found' 00:14:07.829 Process with pid 3518878 is not found 00:14:07.829 16:38:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3518890 ]] 00:14:07.829 16:38:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3518890 00:14:07.829 16:38:12 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 3518890 ']' 00:14:07.829 16:38:12 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 3518890 00:14:07.829 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3518890) - No such process 00:14:07.829 16:38:12 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 3518890 is not found' 00:14:07.829 Process with pid 3518890 is not found 00:14:07.829 16:38:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:14:07.829 00:14:07.829 real 0m17.594s 00:14:07.829 user 0m30.393s 00:14:07.829 sys 0m6.650s 00:14:07.829 16:38:12 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:07.829 16:38:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:07.829 ************************************ 00:14:07.829 END TEST cpu_locks 00:14:07.829 ************************************ 00:14:07.829 00:14:07.829 real 0m44.227s 00:14:07.829 user 1m23.003s 00:14:07.829 sys 0m11.849s 00:14:07.829 16:38:12 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:07.829 16:38:12 event -- common/autotest_common.sh@10 -- # set +x 00:14:07.829 ************************************ 00:14:07.829 END TEST event 00:14:07.829 ************************************ 00:14:07.829 16:38:12 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:14:07.829 16:38:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:07.829 16:38:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:07.829 16:38:12 -- common/autotest_common.sh@10 -- # set +x 00:14:07.829 ************************************ 00:14:07.829 START TEST thread 00:14:07.829 ************************************ 00:14:07.829 16:38:12 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:14:08.088 * Looking for test storage... 00:14:08.088 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:14:08.088 16:38:12 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:08.088 16:38:12 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:14:08.088 16:38:12 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:08.088 16:38:12 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:08.088 16:38:12 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:08.088 16:38:12 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:08.088 16:38:12 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:08.088 16:38:12 thread -- scripts/common.sh@336 -- # IFS=.-: 00:14:08.088 16:38:12 thread -- scripts/common.sh@336 -- # read -ra ver1 00:14:08.088 16:38:12 thread -- scripts/common.sh@337 -- # IFS=.-: 00:14:08.088 16:38:12 thread -- scripts/common.sh@337 -- # read -ra ver2 00:14:08.088 16:38:12 thread -- scripts/common.sh@338 -- # local 'op=<' 00:14:08.088 16:38:12 thread -- scripts/common.sh@340 -- # ver1_l=2 00:14:08.088 16:38:12 thread -- scripts/common.sh@341 -- # ver2_l=1 00:14:08.088 16:38:12 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:08.088 16:38:12 thread -- scripts/common.sh@344 -- # case "$op" in 00:14:08.088 16:38:12 thread -- scripts/common.sh@345 -- # : 1 00:14:08.088 16:38:12 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:08.088 16:38:12 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:08.088 16:38:12 thread -- scripts/common.sh@365 -- # decimal 1 00:14:08.088 16:38:12 thread -- scripts/common.sh@353 -- # local d=1 00:14:08.088 16:38:12 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:08.088 16:38:12 thread -- scripts/common.sh@355 -- # echo 1 00:14:08.088 16:38:12 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:14:08.088 16:38:12 thread -- scripts/common.sh@366 -- # decimal 2 00:14:08.088 16:38:12 thread -- scripts/common.sh@353 -- # local d=2 00:14:08.088 16:38:12 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:08.088 16:38:12 thread -- scripts/common.sh@355 -- # echo 2 00:14:08.088 16:38:12 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:14:08.088 16:38:12 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:08.088 16:38:12 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:08.088 16:38:12 thread -- scripts/common.sh@368 -- # return 0 00:14:08.088 16:38:12 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:08.088 16:38:12 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:08.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.088 --rc genhtml_branch_coverage=1 00:14:08.088 --rc genhtml_function_coverage=1 00:14:08.088 --rc genhtml_legend=1 00:14:08.088 --rc geninfo_all_blocks=1 00:14:08.088 --rc geninfo_unexecuted_blocks=1 00:14:08.088 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:08.088 ' 00:14:08.088 16:38:12 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:08.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.088 --rc genhtml_branch_coverage=1 00:14:08.088 --rc genhtml_function_coverage=1 00:14:08.088 --rc genhtml_legend=1 
00:14:08.088 --rc geninfo_all_blocks=1 00:14:08.088 --rc geninfo_unexecuted_blocks=1 00:14:08.088 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:08.088 ' 00:14:08.088 16:38:12 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:08.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.088 --rc genhtml_branch_coverage=1 00:14:08.088 --rc genhtml_function_coverage=1 00:14:08.088 --rc genhtml_legend=1 00:14:08.088 --rc geninfo_all_blocks=1 00:14:08.088 --rc geninfo_unexecuted_blocks=1 00:14:08.088 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:08.088 ' 00:14:08.088 16:38:12 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:08.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:08.088 --rc genhtml_branch_coverage=1 00:14:08.088 --rc genhtml_function_coverage=1 00:14:08.088 --rc genhtml_legend=1 00:14:08.088 --rc geninfo_all_blocks=1 00:14:08.088 --rc geninfo_unexecuted_blocks=1 00:14:08.088 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:08.088 ' 00:14:08.088 16:38:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:14:08.088 16:38:12 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:14:08.088 16:38:12 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:08.088 16:38:12 thread -- common/autotest_common.sh@10 -- # set +x 00:14:08.088 ************************************ 00:14:08.088 START TEST thread_poller_perf 00:14:08.088 ************************************ 00:14:08.088 16:38:12 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:14:08.088 [2024-11-05 16:38:12.614686] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:14:08.088 [2024-11-05 16:38:12.614786] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3519344 ] 00:14:08.347 [2024-11-05 16:38:12.741467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.347 [2024-11-05 16:38:12.797773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.347 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:14:09.282 [2024-11-05T15:38:13.865Z] ====================================== 00:14:09.282 [2024-11-05T15:38:13.865Z] busy:2308451218 (cyc) 00:14:09.282 [2024-11-05T15:38:13.865Z] total_run_count: 527000 00:14:09.282 [2024-11-05T15:38:13.865Z] tsc_hz: 2300000000 (cyc) 00:14:09.282 [2024-11-05T15:38:13.865Z] ====================================== 00:14:09.282 [2024-11-05T15:38:13.865Z] poller_cost: 4380 (cyc), 1904 (nsec) 00:14:09.282 00:14:09.282 real 0m1.257s 00:14:09.282 user 0m1.123s 00:14:09.282 sys 0m0.128s 00:14:09.282 16:38:13 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:09.282 16:38:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:14:09.282 ************************************ 00:14:09.282 END TEST thread_poller_perf 00:14:09.282 ************************************ 00:14:09.541 16:38:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:14:09.541 16:38:13 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:14:09.541 16:38:13 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:09.541 16:38:13 thread -- common/autotest_common.sh@10 -- # set +x 00:14:09.541 ************************************ 00:14:09.541 START TEST thread_poller_perf 00:14:09.541 ************************************ 00:14:09.541 16:38:13 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:14:09.541 [2024-11-05 16:38:13.926841] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:14:09.541 [2024-11-05 16:38:13.926884] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3519546 ] 00:14:09.541 [2024-11-05 16:38:14.031744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.541 [2024-11-05 16:38:14.086786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.541 Running 1000 pollers for 1 seconds with 0 microseconds period. 
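(Annotation.) Judging by the echoed banners, poller_perf's -b is the poller count, -l the poller period in microseconds, and -t the run time in seconds. The summary block above is then plain arithmetic: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds at tsc_hz. Checking the first run's numbers (the -l 0 run that just started comes out to 279 cyc / 121 nsec by the same formula, as its summary below confirms):

    awk 'BEGIN {
        busy = 2308451218; runs = 527000; hz = 2300000000
        cyc = busy / runs                                  # cycles per poller invocation
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc / hz * 1e9
    }'
    # poller_cost: 4380 (cyc), 1904 (nsec)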
00:14:10.916 [2024-11-05T15:38:15.499Z] ====================================== 00:14:10.916 [2024-11-05T15:38:15.499Z] busy:2301857260 (cyc) 00:14:10.916 [2024-11-05T15:38:15.499Z] total_run_count: 8243000 00:14:10.916 [2024-11-05T15:38:15.499Z] tsc_hz: 2300000000 (cyc) 00:14:10.916 [2024-11-05T15:38:15.499Z] ====================================== 00:14:10.916 [2024-11-05T15:38:15.499Z] poller_cost: 279 (cyc), 121 (nsec) 00:14:10.916 00:14:10.916 real 0m1.218s 00:14:10.916 user 0m1.108s 00:14:10.916 sys 0m0.104s 00:14:10.916 16:38:15 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:10.916 16:38:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:14:10.916 ************************************ 00:14:10.916 END TEST thread_poller_perf 00:14:10.916 ************************************ 00:14:10.916 16:38:15 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:14:10.916 16:38:15 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:14:10.916 16:38:15 thread -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:10.916 16:38:15 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:10.916 16:38:15 thread -- common/autotest_common.sh@10 -- # set +x 00:14:10.916 ************************************ 00:14:10.916 START TEST thread_spdk_lock 00:14:10.916 ************************************ 00:14:10.916 16:38:15 thread.thread_spdk_lock -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:14:10.916 [2024-11-05 16:38:15.209159] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:14:10.916 [2024-11-05 16:38:15.209266] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3519738 ] 00:14:10.916 [2024-11-05 16:38:15.333084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:10.916 [2024-11-05 16:38:15.390052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.916 [2024-11-05 16:38:15.390058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.482 [2024-11-05 16:38:15.895341] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:14:11.482 [2024-11-05 16:38:15.895387] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:14:11.482 [2024-11-05 16:38:15.895403] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x14d2c80 00:14:11.482 [2024-11-05 16:38:15.896348] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:14:11.482 [2024-11-05 16:38:15.896454] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:14:11.482 [2024-11-05 
16:38:15.896480] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:14:11.482 Starting test contend 00:14:11.482 Worker Delay Wait us Hold us Total us 00:14:11.482 0 3 150391 192249 342641 00:14:11.482 1 5 79200 292665 371865 00:14:11.482 PASS test contend 00:14:11.482 Starting test hold_by_poller 00:14:11.482 PASS test hold_by_poller 00:14:11.482 Starting test hold_by_message 00:14:11.482 PASS test hold_by_message 00:14:11.482 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:14:11.482 100014 assertions passed 00:14:11.482 0 assertions failed 00:14:11.482 00:14:11.482 real 0m0.752s 00:14:11.482 user 0m1.126s 00:14:11.482 sys 0m0.127s 00:14:11.482 16:38:15 thread.thread_spdk_lock -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:11.482 16:38:15 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:14:11.482 ************************************ 00:14:11.482 END TEST thread_spdk_lock 00:14:11.482 ************************************ 00:14:11.482 00:14:11.482 real 0m3.615s 00:14:11.482 user 0m3.542s 00:14:11.482 sys 0m0.594s 00:14:11.482 16:38:15 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:11.482 16:38:15 thread -- common/autotest_common.sh@10 -- # set +x 00:14:11.482 ************************************ 00:14:11.482 END TEST thread 00:14:11.482 ************************************ 00:14:11.482 16:38:16 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:14:11.482 16:38:16 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:14:11.482 16:38:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:11.482 16:38:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:11.482 16:38:16 -- common/autotest_common.sh@10 -- # set +x 00:14:11.482 ************************************ 00:14:11.482 START TEST app_cmdline 00:14:11.482 ************************************ 00:14:11.482 16:38:16 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:14:11.741 * Looking for test storage... 
00:14:11.741 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:14:11.741 16:38:16 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:11.741 16:38:16 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:14:11.741 16:38:16 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:11.741 16:38:16 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@345 -- # : 1 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:11.741 16:38:16 app_cmdline -- scripts/common.sh@368 -- # return 0 00:14:11.741 16:38:16 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:11.741 16:38:16 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:11.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.741 --rc genhtml_branch_coverage=1 00:14:11.741 --rc genhtml_function_coverage=1 00:14:11.741 --rc genhtml_legend=1 00:14:11.741 --rc geninfo_all_blocks=1 00:14:11.741 --rc geninfo_unexecuted_blocks=1 00:14:11.741 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:11.741 ' 00:14:11.741 16:38:16 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:11.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.741 --rc genhtml_branch_coverage=1 00:14:11.741 --rc genhtml_function_coverage=1 00:14:11.741 --rc 
genhtml_legend=1 00:14:11.741 --rc geninfo_all_blocks=1 00:14:11.741 --rc geninfo_unexecuted_blocks=1 00:14:11.741 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:11.741 ' 00:14:11.741 16:38:16 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:11.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.741 --rc genhtml_branch_coverage=1 00:14:11.741 --rc genhtml_function_coverage=1 00:14:11.741 --rc genhtml_legend=1 00:14:11.741 --rc geninfo_all_blocks=1 00:14:11.741 --rc geninfo_unexecuted_blocks=1 00:14:11.741 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:11.741 ' 00:14:11.741 16:38:16 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:11.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.741 --rc genhtml_branch_coverage=1 00:14:11.741 --rc genhtml_function_coverage=1 00:14:11.741 --rc genhtml_legend=1 00:14:11.741 --rc geninfo_all_blocks=1 00:14:11.741 --rc geninfo_unexecuted_blocks=1 00:14:11.741 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:11.741 ' 00:14:11.741 16:38:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:14:11.741 16:38:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3519978 00:14:11.741 16:38:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3519978 00:14:11.741 16:38:16 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:14:11.741 16:38:16 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 3519978 ']' 00:14:11.741 16:38:16 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.741 16:38:16 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:11.741 16:38:16 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.741 16:38:16 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:11.741 16:38:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:14:11.741 [2024-11-05 16:38:16.274358] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
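(Annotation.) The --rpcs-allowed list passed above is the whole point of this test: only the whitelisted methods are callable, and anything else is rejected with JSON-RPC -32601, as the exchange that follows shows. In rpc.py terms:

    scripts/rpc.py spdk_get_version         # whitelisted -> the version JSON below
    scripts/rpc.py rpc_get_methods          # whitelisted -> exactly the two allowed names
    scripts/rpc.py env_dpdk_get_mem_stats   # not whitelisted -> -32601 "Method not found"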
00:14:11.741 [2024-11-05 16:38:16.274429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3519978 ] 00:14:11.999 [2024-11-05 16:38:16.401530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.999 [2024-11-05 16:38:16.454894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.988 16:38:17 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:12.988 16:38:17 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:14:12.988 16:38:17 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:14:12.988 { 00:14:12.988 "version": "SPDK v25.01-pre git sha1 4c618f461", 00:14:12.988 "fields": { 00:14:12.988 "major": 25, 00:14:12.988 "minor": 1, 00:14:12.988 "patch": 0, 00:14:12.988 "suffix": "-pre", 00:14:12.988 "commit": "4c618f461" 00:14:12.988 } 00:14:12.988 } 00:14:12.988 16:38:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:14:12.988 16:38:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:14:12.988 16:38:17 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:14:12.988 16:38:17 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:14:12.988 16:38:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:14:12.988 16:38:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:14:12.988 16:38:17 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.988 16:38:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:14:12.988 16:38:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:14:12.988 16:38:17 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.988 16:38:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:14:12.988 16:38:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:14:12.988 16:38:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:14:12.988 16:38:17 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:14:12.988 16:38:17 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:14:12.988 16:38:17 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:14:12.989 16:38:17 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:12.989 16:38:17 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:14:12.989 16:38:17 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:12.989 16:38:17 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:14:12.989 16:38:17 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:12.989 16:38:17 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:14:12.989 16:38:17 app_cmdline -- 
common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:14:12.989 16:38:17 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:14:13.271 request: 00:14:13.271 { 00:14:13.271 "method": "env_dpdk_get_mem_stats", 00:14:13.271 "req_id": 1 00:14:13.271 } 00:14:13.271 Got JSON-RPC error response 00:14:13.271 response: 00:14:13.271 { 00:14:13.271 "code": -32601, 00:14:13.271 "message": "Method not found" 00:14:13.271 } 00:14:13.271 16:38:17 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:14:13.271 16:38:17 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:13.271 16:38:17 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:13.271 16:38:17 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:13.271 16:38:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3519978 00:14:13.271 16:38:17 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 3519978 ']' 00:14:13.271 16:38:17 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 3519978 00:14:13.271 16:38:17 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:14:13.271 16:38:17 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:13.271 16:38:17 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3519978 00:14:13.271 16:38:17 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:13.271 16:38:17 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:13.271 16:38:17 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3519978' 00:14:13.271 killing process with pid 3519978 00:14:13.271 16:38:17 app_cmdline -- common/autotest_common.sh@971 -- # kill 3519978 00:14:13.271 16:38:17 app_cmdline -- common/autotest_common.sh@976 -- # wait 3519978 00:14:13.861 00:14:13.861 real 0m2.149s 00:14:13.861 user 0m2.600s 00:14:13.861 sys 0m0.630s 00:14:13.861 16:38:18 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:13.861 16:38:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:14:13.861 ************************************ 00:14:13.861 END TEST app_cmdline 00:14:13.861 ************************************ 00:14:13.861 16:38:18 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:14:13.861 16:38:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:13.861 16:38:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:13.861 16:38:18 -- common/autotest_common.sh@10 -- # set +x 00:14:13.861 ************************************ 00:14:13.861 START TEST version 00:14:13.861 ************************************ 00:14:13.861 16:38:18 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:14:13.861 * Looking for test storage... 
00:14:13.861 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:14:13.861 16:38:18 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:13.861 16:38:18 version -- common/autotest_common.sh@1691 -- # lcov --version 00:14:13.861 16:38:18 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:14.132 16:38:18 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:14.132 16:38:18 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:14.132 16:38:18 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:14.132 16:38:18 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:14.132 16:38:18 version -- scripts/common.sh@336 -- # IFS=.-: 00:14:14.132 16:38:18 version -- scripts/common.sh@336 -- # read -ra ver1 00:14:14.132 16:38:18 version -- scripts/common.sh@337 -- # IFS=.-: 00:14:14.132 16:38:18 version -- scripts/common.sh@337 -- # read -ra ver2 00:14:14.132 16:38:18 version -- scripts/common.sh@338 -- # local 'op=<' 00:14:14.132 16:38:18 version -- scripts/common.sh@340 -- # ver1_l=2 00:14:14.132 16:38:18 version -- scripts/common.sh@341 -- # ver2_l=1 00:14:14.132 16:38:18 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:14.132 16:38:18 version -- scripts/common.sh@344 -- # case "$op" in 00:14:14.132 16:38:18 version -- scripts/common.sh@345 -- # : 1 00:14:14.132 16:38:18 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:14.132 16:38:18 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:14.132 16:38:18 version -- scripts/common.sh@365 -- # decimal 1 00:14:14.132 16:38:18 version -- scripts/common.sh@353 -- # local d=1 00:14:14.132 16:38:18 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:14.132 16:38:18 version -- scripts/common.sh@355 -- # echo 1 00:14:14.132 16:38:18 version -- scripts/common.sh@365 -- # ver1[v]=1 00:14:14.132 16:38:18 version -- scripts/common.sh@366 -- # decimal 2 00:14:14.132 16:38:18 version -- scripts/common.sh@353 -- # local d=2 00:14:14.132 16:38:18 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:14.132 16:38:18 version -- scripts/common.sh@355 -- # echo 2 00:14:14.132 16:38:18 version -- scripts/common.sh@366 -- # ver2[v]=2 00:14:14.132 16:38:18 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:14.132 16:38:18 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:14.132 16:38:18 version -- scripts/common.sh@368 -- # return 0 00:14:14.132 16:38:18 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:14.132 16:38:18 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:14.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.132 --rc genhtml_branch_coverage=1 00:14:14.132 --rc genhtml_function_coverage=1 00:14:14.132 --rc genhtml_legend=1 00:14:14.132 --rc geninfo_all_blocks=1 00:14:14.132 --rc geninfo_unexecuted_blocks=1 00:14:14.132 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:14.132 ' 00:14:14.132 16:38:18 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:14.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.133 --rc genhtml_branch_coverage=1 00:14:14.133 --rc genhtml_function_coverage=1 00:14:14.133 --rc genhtml_legend=1 00:14:14.133 --rc geninfo_all_blocks=1 00:14:14.133 --rc geninfo_unexecuted_blocks=1 00:14:14.133 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:14.133 ' 00:14:14.133 16:38:18 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:14.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.133 --rc genhtml_branch_coverage=1 00:14:14.133 --rc genhtml_function_coverage=1 00:14:14.133 --rc genhtml_legend=1 00:14:14.133 --rc geninfo_all_blocks=1 00:14:14.133 --rc geninfo_unexecuted_blocks=1 00:14:14.133 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:14.133 ' 00:14:14.133 16:38:18 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:14.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.133 --rc genhtml_branch_coverage=1 00:14:14.133 --rc genhtml_function_coverage=1 00:14:14.133 --rc genhtml_legend=1 00:14:14.133 --rc geninfo_all_blocks=1 00:14:14.133 --rc geninfo_unexecuted_blocks=1 00:14:14.133 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:14.133 ' 00:14:14.133 16:38:18 version -- app/version.sh@17 -- # get_header_version major 00:14:14.133 16:38:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:14:14.133 16:38:18 version -- app/version.sh@14 -- # cut -f2 00:14:14.133 16:38:18 version -- app/version.sh@14 -- # tr -d '"' 00:14:14.133 16:38:18 version -- app/version.sh@17 -- # major=25 00:14:14.133 16:38:18 version -- app/version.sh@18 -- # get_header_version minor 00:14:14.133 16:38:18 version -- app/version.sh@14 -- # tr -d '"' 00:14:14.133 16:38:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:14:14.133 16:38:18 version -- app/version.sh@14 -- # cut -f2 00:14:14.133 16:38:18 version -- app/version.sh@18 -- # minor=1 00:14:14.133 16:38:18 version -- app/version.sh@19 -- # get_header_version patch 00:14:14.133 16:38:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:14:14.133 16:38:18 version -- app/version.sh@14 -- # cut -f2 00:14:14.133 16:38:18 version -- app/version.sh@14 -- # tr -d '"' 00:14:14.133 16:38:18 version -- app/version.sh@19 -- # patch=0 00:14:14.133 16:38:18 version -- app/version.sh@20 -- # get_header_version suffix 00:14:14.133 16:38:18 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:14:14.133 16:38:18 version -- app/version.sh@14 -- # cut -f2 00:14:14.133 16:38:18 version -- app/version.sh@14 -- # tr -d '"' 00:14:14.133 16:38:18 version -- app/version.sh@20 -- # suffix=-pre 00:14:14.133 16:38:18 version -- app/version.sh@22 -- # version=25.1 00:14:14.133 16:38:18 version -- app/version.sh@25 -- # (( patch != 0 )) 00:14:14.133 16:38:18 version -- app/version.sh@28 -- # version=25.1rc0 00:14:14.133 16:38:18 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:14:14.133 16:38:18 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:14:14.133 16:38:18 version -- app/version.sh@30 -- # py_version=25.1rc0 00:14:14.133 16:38:18 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:14:14.133 00:14:14.133 real 0m0.281s 00:14:14.133 user 0m0.161s 00:14:14.133 sys 0m0.164s 00:14:14.133 16:38:18 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:14.133 16:38:18 version -- common/autotest_common.sh@10 -- # set +x 00:14:14.133 ************************************ 00:14:14.133 END TEST version 00:14:14.133 ************************************ 00:14:14.133 16:38:18 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:14:14.133 16:38:18 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:14:14.133 16:38:18 -- spdk/autotest.sh@194 -- # uname -s 00:14:14.133 16:38:18 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:14:14.133 16:38:18 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:14:14.133 16:38:18 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:14:14.133 16:38:18 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:14:14.133 16:38:18 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:14:14.133 16:38:18 -- spdk/autotest.sh@256 -- # timing_exit lib 00:14:14.133 16:38:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:14.133 16:38:18 -- common/autotest_common.sh@10 -- # set +x 00:14:14.133 16:38:18 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:14:14.133 16:38:18 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:14:14.133 16:38:18 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:14:14.133 16:38:18 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:14:14.133 16:38:18 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:14:14.133 16:38:18 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:14:14.133 16:38:18 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:14:14.133 16:38:18 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:14:14.133 16:38:18 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:14:14.133 16:38:18 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:14:14.133 16:38:18 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:14:14.133 16:38:18 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:14:14.133 16:38:18 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:14:14.133 16:38:18 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:14:14.133 16:38:18 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:14:14.133 16:38:18 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:14:14.133 16:38:18 -- spdk/autotest.sh@370 -- # [[ 1 -eq 1 ]] 00:14:14.133 16:38:18 -- spdk/autotest.sh@371 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:14:14.133 16:38:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:14.133 16:38:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:14.133 16:38:18 -- common/autotest_common.sh@10 -- # set +x 00:14:14.133 ************************************ 00:14:14.133 START TEST llvm_fuzz 00:14:14.133 ************************************ 00:14:14.133 16:38:18 llvm_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:14:14.133 * Looking for test storage... 
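[editor's note] The whole version check just traced reduces to a few field extractions from the generated header. A condensed sketch of the app/version.sh steps above (paths from this run; the -pre to rc0 mapping is inferred from the version=25.1rc0 assignment in the trace):

hdr=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h
get_header_version() {
    # Pull one tab-separated field out of version.h and strip the quotes,
    # mirroring the grep | cut | tr pipeline in the trace.
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}
major=$(get_header_version MAJOR)    # 25 in this run
minor=$(get_header_version MINOR)    # 1
patch=$(get_header_version PATCH)    # 0
suffix=$(get_header_version SUFFIX)  # -pre
version=$major.$minor
(( patch != 0 )) && version=$version.$patch
[[ $suffix == -pre ]] && version=${version}rc0
# The test then asserts this equals what the Python package reports:
#   python3 -c 'import spdk; print(spdk.__version__)'  ->  25.1rc0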
00:14:14.393 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:14:14.393 16:38:18 llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:14.393 16:38:18 llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:14:14.393 16:38:18 llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:14.393 16:38:18 llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:14.393 16:38:18 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:14:14.393 16:38:18 llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:14.393 16:38:18 llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:14.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.393 --rc genhtml_branch_coverage=1 00:14:14.393 --rc genhtml_function_coverage=1 00:14:14.393 --rc genhtml_legend=1 00:14:14.393 --rc geninfo_all_blocks=1 00:14:14.393 --rc geninfo_unexecuted_blocks=1 00:14:14.393 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:14.393 ' 00:14:14.393 16:38:18 llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:14.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.393 --rc genhtml_branch_coverage=1 00:14:14.393 --rc genhtml_function_coverage=1 00:14:14.393 --rc genhtml_legend=1 00:14:14.393 --rc geninfo_all_blocks=1 00:14:14.393 --rc 
geninfo_unexecuted_blocks=1 00:14:14.393 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:14.393 ' 00:14:14.393 16:38:18 llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:14.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.393 --rc genhtml_branch_coverage=1 00:14:14.393 --rc genhtml_function_coverage=1 00:14:14.393 --rc genhtml_legend=1 00:14:14.393 --rc geninfo_all_blocks=1 00:14:14.393 --rc geninfo_unexecuted_blocks=1 00:14:14.393 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:14.393 ' 00:14:14.393 16:38:18 llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:14.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.393 --rc genhtml_branch_coverage=1 00:14:14.393 --rc genhtml_function_coverage=1 00:14:14.393 --rc genhtml_legend=1 00:14:14.393 --rc geninfo_all_blocks=1 00:14:14.393 --rc geninfo_unexecuted_blocks=1 00:14:14.393 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:14.393 ' 00:14:14.393 16:38:18 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:14:14.393 16:38:18 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:14:14.393 16:38:18 llvm_fuzz -- common/autotest_common.sh@548 -- # fuzzers=() 00:14:14.393 16:38:18 llvm_fuzz -- common/autotest_common.sh@548 -- # local fuzzers 00:14:14.393 16:38:18 llvm_fuzz -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:14:14.393 16:38:18 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:14:14.393 16:38:18 llvm_fuzz -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:14:14.393 16:38:18 llvm_fuzz -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:14:14.393 16:38:18 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:14:14.393 16:38:18 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:14:14.393 16:38:18 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:14:14.393 16:38:18 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:14:14.393 16:38:18 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:14:14.393 16:38:18 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:14:14.393 16:38:18 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:14:14.393 16:38:18 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:14:14.393 16:38:18 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:14:14.393 16:38:18 llvm_fuzz -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:14.393 16:38:18 llvm_fuzz -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:14.393 16:38:18 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:14.393 ************************************ 00:14:14.393 START TEST nvmf_llvm_fuzz 00:14:14.393 ************************************ 00:14:14.393 16:38:18 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:14:14.393 * Looking for test storage... 
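[editor's note] The fuzzer discovery traced above is worth spelling out: with no target list in the environment, llvm.sh falls back to globbing the fuzz directory, keeping only the basenames, and letting the case statement discard the helper scripts that live alongside the real targets. A sketch of that loop (the echo stands in for the run_test invocations; the case arms are reconstructed from the three iterations visible in the trace):

rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
fuzzers=("$rootdir/test/fuzz/llvm/"*)   # -> common.sh llvm-gcov.sh nvmf vfio
fuzzers=("${fuzzers[@]##*/}")           # strip the leading directories
for fuzzer in "${fuzzers[@]}"; do
    case "$fuzzer" in
        nvmf | vfio)
            echo "run_test ${fuzzer}_llvm_fuzz $rootdir/test/fuzz/llvm/$fuzzer/run.sh" ;;
        *) ;;  # common.sh and llvm-gcov.sh are support files, not targets
    esac
done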
00:14:14.393 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:14:14.393 16:38:18 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:14.393 16:38:18 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:14:14.393 16:38:18 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:14.656 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:14.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.657 --rc genhtml_branch_coverage=1 00:14:14.657 --rc genhtml_function_coverage=1 00:14:14.657 --rc genhtml_legend=1 00:14:14.657 --rc geninfo_all_blocks=1 00:14:14.657 --rc geninfo_unexecuted_blocks=1 00:14:14.657 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:14.657 ' 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:14.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.657 --rc genhtml_branch_coverage=1 00:14:14.657 --rc genhtml_function_coverage=1 00:14:14.657 --rc genhtml_legend=1 00:14:14.657 --rc geninfo_all_blocks=1 00:14:14.657 --rc geninfo_unexecuted_blocks=1 00:14:14.657 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:14.657 ' 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:14.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.657 --rc genhtml_branch_coverage=1 00:14:14.657 --rc genhtml_function_coverage=1 00:14:14.657 --rc genhtml_legend=1 00:14:14.657 --rc geninfo_all_blocks=1 00:14:14.657 --rc geninfo_unexecuted_blocks=1 00:14:14.657 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:14.657 ' 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:14.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.657 --rc genhtml_branch_coverage=1 00:14:14.657 --rc genhtml_function_coverage=1 00:14:14.657 --rc genhtml_legend=1 00:14:14.657 --rc geninfo_all_blocks=1 00:14:14.657 --rc geninfo_unexecuted_blocks=1 00:14:14.657 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:14.657 ' 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:14:14.657 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:14.658 #define SPDK_CONFIG_H 00:14:14.658 #define SPDK_CONFIG_AIO_FSDEV 1 00:14:14.658 #define SPDK_CONFIG_APPS 1 00:14:14.658 #define SPDK_CONFIG_ARCH native 00:14:14.658 #undef SPDK_CONFIG_ASAN 00:14:14.658 #undef SPDK_CONFIG_AVAHI 00:14:14.658 #undef SPDK_CONFIG_CET 00:14:14.658 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:14:14.658 #define SPDK_CONFIG_COVERAGE 1 00:14:14.658 #define SPDK_CONFIG_CROSS_PREFIX 00:14:14.658 #undef SPDK_CONFIG_CRYPTO 00:14:14.658 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:14.658 #undef SPDK_CONFIG_CUSTOMOCF 00:14:14.658 #undef SPDK_CONFIG_DAOS 00:14:14.658 #define SPDK_CONFIG_DAOS_DIR 00:14:14.658 #define SPDK_CONFIG_DEBUG 1 00:14:14.658 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:14.658 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:14:14.658 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:14.658 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:14.658 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:14.658 #undef SPDK_CONFIG_DPDK_UADK 00:14:14.658 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:14:14.658 #define SPDK_CONFIG_EXAMPLES 1 00:14:14.658 #undef SPDK_CONFIG_FC 00:14:14.658 #define SPDK_CONFIG_FC_PATH 00:14:14.658 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:14.658 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:14.658 #define SPDK_CONFIG_FSDEV 1 00:14:14.658 #undef SPDK_CONFIG_FUSE 00:14:14.658 #define SPDK_CONFIG_FUZZER 1 00:14:14.658 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:14:14.658 #undef 
SPDK_CONFIG_GOLANG 00:14:14.658 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:14.658 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:14.658 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:14.658 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:14.658 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:14.658 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:14.658 #undef SPDK_CONFIG_HAVE_LZ4 00:14:14.658 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:14:14.658 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:14:14.658 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:14.658 #define SPDK_CONFIG_IDXD 1 00:14:14.658 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:14.658 #undef SPDK_CONFIG_IPSEC_MB 00:14:14.658 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:14.658 #define SPDK_CONFIG_ISAL 1 00:14:14.658 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:14.658 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:14.658 #define SPDK_CONFIG_LIBDIR 00:14:14.658 #undef SPDK_CONFIG_LTO 00:14:14.658 #define SPDK_CONFIG_MAX_LCORES 128 00:14:14.658 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:14:14.658 #define SPDK_CONFIG_NVME_CUSE 1 00:14:14.658 #undef SPDK_CONFIG_OCF 00:14:14.658 #define SPDK_CONFIG_OCF_PATH 00:14:14.658 #define SPDK_CONFIG_OPENSSL_PATH 00:14:14.658 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:14.658 #define SPDK_CONFIG_PGO_DIR 00:14:14.658 #undef SPDK_CONFIG_PGO_USE 00:14:14.658 #define SPDK_CONFIG_PREFIX /usr/local 00:14:14.658 #undef SPDK_CONFIG_RAID5F 00:14:14.658 #undef SPDK_CONFIG_RBD 00:14:14.658 #define SPDK_CONFIG_RDMA 1 00:14:14.658 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:14.658 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:14.658 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:14.658 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:14.658 #undef SPDK_CONFIG_SHARED 00:14:14.658 #undef SPDK_CONFIG_SMA 00:14:14.658 #define SPDK_CONFIG_TESTS 1 00:14:14.658 #undef SPDK_CONFIG_TSAN 00:14:14.658 #define SPDK_CONFIG_UBLK 1 00:14:14.658 #define SPDK_CONFIG_UBSAN 1 00:14:14.658 #undef SPDK_CONFIG_UNIT_TESTS 00:14:14.658 #undef SPDK_CONFIG_URING 00:14:14.658 #define SPDK_CONFIG_URING_PATH 00:14:14.658 #undef SPDK_CONFIG_URING_ZNS 00:14:14.658 #undef SPDK_CONFIG_USDT 00:14:14.658 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:14.658 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:14.658 #define SPDK_CONFIG_VFIO_USER 1 00:14:14.658 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:14.658 #define SPDK_CONFIG_VHOST 1 00:14:14.658 #define SPDK_CONFIG_VIRTIO 1 00:14:14.658 #undef SPDK_CONFIG_VTUNE 00:14:14.658 #define SPDK_CONFIG_VTUNE_DIR 00:14:14.658 #define SPDK_CONFIG_WERROR 1 00:14:14.658 #define SPDK_CONFIG_WPDK_DIR 00:14:14.658 #undef SPDK_CONFIG_XNVME 00:14:14.658 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:14:14.658 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:14:14.659 16:38:19 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:14:14.659 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 3520513 ]] 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 3520513 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.OplQgV 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.OplQgV/tests/nvmf /tmp/spdk.OplQgV 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:14.660 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=81427275776 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500290560 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=13073014784 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:14.661 
16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47245381632 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250145280 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4763648 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=18893955072 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900058112 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=6103040 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=46175830016 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250145280 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=1074315264 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=9450016768 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450029056 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:14:14.661 * Looking for test storage... 
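
The trace above is the test-storage probe: it builds a mktemp-based fallback directory, enumerates mounts with df -T into the mounts/fss/avails arrays, then walks the candidate dirs until one has enough free space. A minimal sketch of that logic, assuming GNU coreutils; the names below are illustrative, not the autotest_common.sh source:

    # Hedged sketch of the storage probe traced above (GNU coreutils assumed).
    requested_size=2214592512                      # 2 GiB + margin, as in the trace
    storage_fallback=$(mktemp -udt spdk.XXXXXX)    # e.g. /tmp/spdk.OplQgV
    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        mkdir -p "$target_dir"
        mount_point=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=$(df -B1 --output=avail "$target_dir" | tail -1)  # bytes free
        if (( target_space >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$target_dir"
            break
        fi
    done
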
00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=81427275776 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=15287607296 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:14:14.661 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1683 -- # true 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:14.661 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:14.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.662 --rc genhtml_branch_coverage=1 00:14:14.662 --rc genhtml_function_coverage=1 00:14:14.662 --rc genhtml_legend=1 00:14:14.662 --rc geninfo_all_blocks=1 00:14:14.662 --rc geninfo_unexecuted_blocks=1 00:14:14.662 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:14.662 ' 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:14.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.662 --rc genhtml_branch_coverage=1 00:14:14.662 --rc genhtml_function_coverage=1 00:14:14.662 --rc genhtml_legend=1 00:14:14.662 --rc geninfo_all_blocks=1 00:14:14.662 --rc geninfo_unexecuted_blocks=1 00:14:14.662 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:14.662 ' 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:14.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.662 --rc genhtml_branch_coverage=1 00:14:14.662 --rc genhtml_function_coverage=1 00:14:14.662 --rc genhtml_legend=1 00:14:14.662 --rc geninfo_all_blocks=1 00:14:14.662 --rc geninfo_unexecuted_blocks=1 00:14:14.662 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:14.662 ' 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:14.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.662 --rc genhtml_branch_coverage=1 00:14:14.662 --rc genhtml_function_coverage=1 00:14:14.662 --rc genhtml_legend=1 00:14:14.662 --rc geninfo_all_blocks=1 00:14:14.662 --rc geninfo_unexecuted_blocks=1 00:14:14.662 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:14:14.662 ' 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:14:14.662 16:38:19 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:14:14.662 16:38:19 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:14:14.921 [2024-11-05 16:38:19.248326] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:14:14.921 [2024-11-05 16:38:19.248397] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3520569 ] 00:14:15.180 [2024-11-05 16:38:19.515689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.180 [2024-11-05 16:38:19.564264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.180 [2024-11-05 16:38:19.628351] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.180 [2024-11-05 16:38:19.644606] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:14:15.180 INFO: Running with entropic power schedule (0xFF, 100). 00:14:15.180 INFO: Seed: 2937279108 00:14:15.180 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:14:15.181 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:14:15.181 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:14:15.181 INFO: A corpus is not provided, starting from an empty corpus 00:14:15.181 #2 INITED exec/s: 0 rss: 66Mb 00:14:15.181 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:14:15.181 This may also happen if the target rejected all inputs we tried so far 00:14:15.181 [2024-11-05 16:38:19.715339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41414141 00:14:15.181 [2024-11-05 16:38:19.715393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:15.704 NEW_FUNC[1/714]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:14:15.704 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:14:15.704 #29 NEW cov: 12172 ft: 12167 corp: 2/110b lim: 320 exec/s: 0 rss: 73Mb L: 109/109 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:14:15.704 [2024-11-05 16:38:20.216622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:00004141 00:14:15.704 [2024-11-05 16:38:20.216682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:15.704 #35 NEW cov: 12285 ft: 12868 corp: 3/183b lim: 320 exec/s: 0 rss: 73Mb L: 73/109 MS: 1 EraseBytes- 00:14:15.963 [2024-11-05 16:38:20.316831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (28) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:14:15.963 [2024-11-05 16:38:20.316871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:15.963 #38 NEW cov: 12294 ft: 13118 corp: 4/278b lim: 320 exec/s: 0 rss: 73Mb L: 95/109 MS: 3 ChangeByte-InsertRepeatedBytes-InsertRepeatedBytes- 00:14:15.963 [2024-11-05 16:38:20.376927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41414141 
00:14:15.963 [2024-11-05 16:38:20.376966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:15.963 #39 NEW cov: 12379 ft: 13402 corp: 5/387b lim: 320 exec/s: 0 rss: 73Mb L: 109/109 MS: 1 ChangeBit- 00:14:15.963 [2024-11-05 16:38:20.437224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:00004141 00:14:15.963 [2024-11-05 16:38:20.437261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:15.963 #45 NEW cov: 12379 ft: 13520 corp: 6/460b lim: 320 exec/s: 0 rss: 73Mb L: 73/109 MS: 1 ChangeBinInt- 00:14:15.963 [2024-11-05 16:38:20.527438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41414141 00:14:15.963 [2024-11-05 16:38:20.527475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:16.223 #46 NEW cov: 12379 ft: 13651 corp: 7/569b lim: 320 exec/s: 0 rss: 73Mb L: 109/109 MS: 1 ShuffleBytes- 00:14:16.223 [2024-11-05 16:38:20.587763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c0c0c0 00:14:16.223 [2024-11-05 16:38:20.587800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:16.223 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:14:16.223 #47 NEW cov: 12402 ft: 13794 corp: 8/670b lim: 320 exec/s: 0 rss: 73Mb L: 101/109 MS: 1 InsertRepeatedBytes- 00:14:16.223 [2024-11-05 16:38:20.678074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:14:16.223 [2024-11-05 16:38:20.678112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:16.223 NEW_FUNC[1/2]: 0x152c358 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2213 00:14:16.223 NEW_FUNC[2/2]: 0x1961fa8 in nvme_get_sgl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:159 00:14:16.223 #48 NEW cov: 12454 ft: 13909 corp: 9/735b lim: 320 exec/s: 48 rss: 73Mb L: 65/109 MS: 1 InsertRepeatedBytes- 00:14:16.223 [2024-11-05 16:38:20.749538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (28) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:14:16.223 [2024-11-05 16:38:20.749575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:16.223 [2024-11-05 16:38:20.749673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:8c8c8c8c cdw11:8c8c8c8c 00:14:16.223 [2024-11-05 16:38:20.749695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:16.223 [2024-11-05 16:38:20.749799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (8c) qid:0 cid:6 nsid:8c8c8c8c cdw10:8c8c8c8c cdw11:8c8c8c8c 00:14:16.223 [2024-11-05 16:38:20.749822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
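
Each "#N NEW cov:" record above is libFuzzer reporting an input that increased coverage: cov is covered edges, ft is features, corp is corpus entries/bytes, and exec/s and rss track throughput and memory. A hypothetical helper, assuming the field layout printed in this log, to pull the final figures out of a saved console log ("fuzz.log" is a placeholder path):

    # Last reported coverage/corpus numbers from a libFuzzer console log.
    grep -oE 'NEW cov: [0-9]+ ft: [0-9]+ corp: [0-9]+/[0-9]+b' fuzz.log | tail -1
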
00:14:16.483 #49 NEW cov: 12454 ft: 14158 corp: 10/939b lim: 320 exec/s: 49 rss: 74Mb L: 204/204 MS: 1 InsertRepeatedBytes- 00:14:16.483 [2024-11-05 16:38:20.838837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:00004141 00:14:16.483 [2024-11-05 16:38:20.838876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:16.483 #50 NEW cov: 12454 ft: 14188 corp: 11/1004b lim: 320 exec/s: 50 rss: 74Mb L: 65/204 MS: 1 EraseBytes- 00:14:16.483 [2024-11-05 16:38:20.898960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c0c0c0 00:14:16.483 [2024-11-05 16:38:20.898998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:16.483 #51 NEW cov: 12454 ft: 14242 corp: 12/1105b lim: 320 exec/s: 51 rss: 74Mb L: 101/204 MS: 1 ShuffleBytes- 00:14:16.483 [2024-11-05 16:38:20.989424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c0c0c0 00:14:16.483 [2024-11-05 16:38:20.989464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:16.483 #52 NEW cov: 12454 ft: 14248 corp: 13/1206b lim: 320 exec/s: 52 rss: 74Mb L: 101/204 MS: 1 ChangeBit- 00:14:16.742 [2024-11-05 16:38:21.079778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c0c0c0 00:14:16.742 [2024-11-05 16:38:21.079817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:16.742 #58 NEW cov: 12454 ft: 14300 corp: 14/1308b lim: 320 exec/s: 58 rss: 74Mb L: 102/204 MS: 1 InsertByte- 00:14:16.742 [2024-11-05 16:38:21.139917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:00004141 00:14:16.742 [2024-11-05 16:38:21.139954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:16.742 #59 NEW cov: 12454 ft: 14327 corp: 15/1389b lim: 320 exec/s: 59 rss: 74Mb L: 81/204 MS: 1 CopyPart- 00:14:16.742 [2024-11-05 16:38:21.200283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:00004141 00:14:16.742 [2024-11-05 16:38:21.200325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:16.742 #60 NEW cov: 12454 ft: 14389 corp: 16/1463b lim: 320 exec/s: 60 rss: 74Mb L: 74/204 MS: 1 InsertByte- 00:14:16.742 [2024-11-05 16:38:21.260459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41414141 00:14:16.742 [2024-11-05 16:38:21.260496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:16.742 #61 NEW cov: 12454 ft: 14413 corp: 17/1572b lim: 320 exec/s: 61 rss: 74Mb L: 109/204 MS: 1 ChangeBit- 00:14:16.742 [2024-11-05 16:38:21.320707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41414141 00:14:16.742 [2024-11-05 16:38:21.320748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:17.001 #62 NEW cov: 12454 ft: 14462 corp: 18/1654b lim: 320 exec/s: 62 rss: 74Mb L: 82/204 MS: 1 CMP- DE: "\354/\375k%\237:\000"- 00:14:17.001 [2024-11-05 16:38:21.411145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:0000ffff 00:14:17.001 [2024-11-05 16:38:21.411182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:17.001 #63 NEW cov: 12454 ft: 14468 corp: 19/1728b lim: 320 exec/s: 63 rss: 74Mb L: 74/204 MS: 1 CrossOver- 00:14:17.001 [2024-11-05 16:38:21.471986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41414141 00:14:17.001 [2024-11-05 16:38:21.472023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:17.001 [2024-11-05 16:38:21.472126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (41) qid:0 cid:5 nsid:4141 cdw10:66666666 cdw11:66666666 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:17.002 [2024-11-05 16:38:21.472148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:17.002 [2024-11-05 16:38:21.472258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (66) qid:0 cid:6 nsid:66666666 cdw10:66666666 cdw11:66666666 SGL TRANSPORT DATA BLOCK TRANSPORT 0x6666666666666666 00:14:17.002 [2024-11-05 16:38:21.472279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:17.002 NEW_FUNC[1/1]: 0x1962b18 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:14:17.002 #64 NEW cov: 12467 ft: 14847 corp: 20/1964b lim: 320 exec/s: 64 rss: 74Mb L: 236/236 MS: 1 InsertRepeatedBytes- 00:14:17.002 [2024-11-05 16:38:21.541598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:0000ffff 00:14:17.002 [2024-11-05 16:38:21.541635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:17.261 #65 NEW cov: 12467 ft: 14889 corp: 21/2038b lim: 320 exec/s: 65 rss: 74Mb L: 74/236 MS: 1 ChangeBit- 00:14:17.261 [2024-11-05 16:38:21.631950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:41414141 cdw11:41414141 00:14:17.261 [2024-11-05 16:38:21.631986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:17.261 #66 NEW cov: 12467 ft: 14897 corp: 22/2155b lim: 320 exec/s: 66 rss: 74Mb L: 117/236 MS: 1 PersAutoDict- DE: "\354/\375k%\237:\000"- 00:14:17.261 [2024-11-05 16:38:21.692198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:c0c0c0c0 cdw11:c0c0c0c0 00:14:17.261 [2024-11-05 16:38:21.692236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:17.261 #67 NEW cov: 12467 ft: 14919 corp: 23/2256b lim: 320 exec/s: 33 rss: 74Mb L: 101/236 MS: 1 ChangeBit- 00:14:17.261 #67 DONE cov: 12467 ft: 14919 corp: 23/2256b lim: 320 exec/s: 33 rss: 74Mb 00:14:17.261 ###### Recommended dictionary. 
###### 00:14:17.261 "\354/\375k%\237:\000" # Uses: 1 00:14:17.261 ###### End of recommended dictionary. ###### 00:14:17.261 Done 67 runs in 2 second(s) 00:14:17.261 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:14:17.261 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:14:17.261 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:14:17.261 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:14:17.261 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:14:17.261 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:14:17.261 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:14:17.261 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:14:17.261 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:14:17.261 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:14:17.261 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:14:17.520 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:14:17.520 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:14:17.520 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:14:17.520 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:14:17.520 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:14:17.520 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:14:17.520 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:14:17.521 16:38:21 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:14:17.521 [2024-11-05 16:38:21.865473] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
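
The nvmf/run.sh lines above show how each fuzzer instance in the loop gets its own listener and config: instance i maps to TCP port 44<0i> via printf %02d, and sed rewrites the template's trsvcid before llvm_nvme_fuzz is launched against that trid. A sketch under those assumptions ($rootdir stands in for the workspace spdk checkout; paths are illustrative):

    i=1                                    # second fuzzer of the short run
    port=44$(printf %02d "$i")             # -> 4401
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "/tmp/fuzz_json_$i.conf"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
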
00:14:17.521 [2024-11-05 16:38:21.865528] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3520922 ] 00:14:17.780 [2024-11-05 16:38:22.115676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.780 [2024-11-05 16:38:22.163628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.780 [2024-11-05 16:38:22.227707] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.780 [2024-11-05 16:38:22.243948] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:14:17.780 INFO: Running with entropic power schedule (0xFF, 100). 00:14:17.780 INFO: Seed: 1243311809 00:14:17.780 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:14:17.780 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:14:17.780 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:14:17.780 INFO: A corpus is not provided, starting from an empty corpus 00:14:17.780 #2 INITED exec/s: 0 rss: 67Mb 00:14:17.780 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:14:17.780 This may also happen if the target rejected all inputs we tried so far 00:14:17.780 [2024-11-05 16:38:22.321075] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:17.780 [2024-11-05 16:38:22.321369] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:17.780 [2024-11-05 16:38:22.321656] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:17.780 [2024-11-05 16:38:22.322213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.780 [2024-11-05 16:38:22.322266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:17.780 [2024-11-05 16:38:22.322373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.780 [2024-11-05 16:38:22.322398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:17.780 [2024-11-05 16:38:22.322505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:17.780 [2024-11-05 16:38:22.322527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:18.298 NEW_FUNC[1/716]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:14:18.298 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:14:18.298 #3 NEW cov: 12254 ft: 12250 corp: 2/20b lim: 30 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:14:18.298 [2024-11-05 16:38:22.822252] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:18.298 [2024-11-05 
16:38:22.822554] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:18.298 [2024-11-05 16:38:22.822827] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:18.298 [2024-11-05 16:38:22.823330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.298 [2024-11-05 16:38:22.823382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:18.298 [2024-11-05 16:38:22.823484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.298 [2024-11-05 16:38:22.823506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:18.298 [2024-11-05 16:38:22.823609] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.298 [2024-11-05 16:38:22.823630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:18.558 #4 NEW cov: 12384 ft: 12923 corp: 3/39b lim: 30 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 ShuffleBytes- 00:14:18.558 [2024-11-05 16:38:22.922934] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:18.558 [2024-11-05 16:38:22.923216] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:18.558 [2024-11-05 16:38:22.923498] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:18.558 [2024-11-05 16:38:22.923765] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:18.558 [2024-11-05 16:38:22.924271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.558 [2024-11-05 16:38:22.924311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:18.558 [2024-11-05 16:38:22.924417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.558 [2024-11-05 16:38:22.924439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:18.558 [2024-11-05 16:38:22.924537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.558 [2024-11-05 16:38:22.924559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:18.558 [2024-11-05 16:38:22.924656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.558 [2024-11-05 16:38:22.924678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:18.558 #5 NEW cov: 12390 ft: 13690 corp: 4/64b lim: 30 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 CopyPart- 00:14:18.558 
[2024-11-05 16:38:22.993314] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:18.558 [2024-11-05 16:38:22.993607] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:18.558 [2024-11-05 16:38:22.994125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.558 [2024-11-05 16:38:22.994165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:18.558 [2024-11-05 16:38:22.994262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.558 [2024-11-05 16:38:22.994284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:18.558 #11 NEW cov: 12475 ft: 14161 corp: 5/80b lim: 30 exec/s: 0 rss: 74Mb L: 16/25 MS: 1 EraseBytes- 00:14:18.558 [2024-11-05 16:38:23.093659] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:18.558 [2024-11-05 16:38:23.093952] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:18.558 [2024-11-05 16:38:23.094231] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:18.558 [2024-11-05 16:38:23.094749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.559 [2024-11-05 16:38:23.094789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:18.559 [2024-11-05 16:38:23.094880] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.559 [2024-11-05 16:38:23.094902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:18.559 [2024-11-05 16:38:23.094998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.559 [2024-11-05 16:38:23.095022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:18.818 #12 NEW cov: 12475 ft: 14241 corp: 6/99b lim: 30 exec/s: 0 rss: 74Mb L: 19/25 MS: 1 CopyPart- 00:14:18.818 [2024-11-05 16:38:23.183850] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:14:18.818 [2024-11-05 16:38:23.184363] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.818 [2024-11-05 16:38:23.184402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:18.818 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:14:18.818 #16 NEW cov: 12498 ft: 14606 corp: 7/109b lim: 30 exec/s: 0 rss: 74Mb L: 10/25 MS: 4 InsertByte-EraseBytes-ChangeBit-InsertRepeatedBytes- 00:14:18.818 [2024-11-05 16:38:23.254026] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid 
log page offset 0x20000d2d2 00:14:18.818 [2024-11-05 16:38:23.254530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.818 [2024-11-05 16:38:23.254568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:18.818 #17 NEW cov: 12498 ft: 14716 corp: 8/120b lim: 30 exec/s: 17 rss: 74Mb L: 11/25 MS: 1 EraseBytes- 00:14:18.818 [2024-11-05 16:38:23.324314] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:18.818 [2024-11-05 16:38:23.324822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:18.818 [2024-11-05 16:38:23.324860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:18.818 #18 NEW cov: 12498 ft: 14756 corp: 9/128b lim: 30 exec/s: 18 rss: 74Mb L: 8/25 MS: 1 EraseBytes- 00:14:19.078 [2024-11-05 16:38:23.414737] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:19.078 [2024-11-05 16:38:23.415009] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:19.078 [2024-11-05 16:38:23.415285] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2db 00:14:19.078 [2024-11-05 16:38:23.415778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.078 [2024-11-05 16:38:23.415817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:19.078 [2024-11-05 16:38:23.415923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.078 [2024-11-05 16:38:23.415947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:19.078 [2024-11-05 16:38:23.416056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.078 [2024-11-05 16:38:23.416077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:19.078 #19 NEW cov: 12498 ft: 14786 corp: 10/147b lim: 30 exec/s: 19 rss: 74Mb L: 19/25 MS: 1 ChangeByte- 00:14:19.078 [2024-11-05 16:38:23.474689] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:19.078 [2024-11-05 16:38:23.475211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.078 [2024-11-05 16:38:23.475248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:19.078 #20 NEW cov: 12498 ft: 14886 corp: 11/155b lim: 30 exec/s: 20 rss: 74Mb L: 8/25 MS: 1 ChangeByte- 00:14:19.078 [2024-11-05 16:38:23.565278] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:19.078 [2024-11-05 16:38:23.565571] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page 
offset 0x20000d2d2 00:14:19.078 [2024-11-05 16:38:23.565847] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:19.078 [2024-11-05 16:38:23.566374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.078 [2024-11-05 16:38:23.566412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:19.078 [2024-11-05 16:38:23.566517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d2d202c3 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.078 [2024-11-05 16:38:23.566540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:19.078 [2024-11-05 16:38:23.566642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.078 [2024-11-05 16:38:23.566666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:19.078 #21 NEW cov: 12498 ft: 14906 corp: 12/174b lim: 30 exec/s: 21 rss: 74Mb L: 19/25 MS: 1 ChangeByte- 00:14:19.078 [2024-11-05 16:38:23.625542] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:19.078 [2024-11-05 16:38:23.625836] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:19.078 [2024-11-05 16:38:23.626107] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:19.078 [2024-11-05 16:38:23.626380] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:19.078 [2024-11-05 16:38:23.626934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.078 [2024-11-05 16:38:23.626972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:19.078 [2024-11-05 16:38:23.627074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d25b02d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.078 [2024-11-05 16:38:23.627097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:19.078 [2024-11-05 16:38:23.627202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.078 [2024-11-05 16:38:23.627223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:19.078 [2024-11-05 16:38:23.627323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.078 [2024-11-05 16:38:23.627344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:19.337 #22 NEW cov: 12498 ft: 14946 corp: 13/199b lim: 30 exec/s: 22 rss: 74Mb L: 25/25 MS: 1 ChangeByte- 00:14:19.337 [2024-11-05 16:38:23.695734] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: 
Get log page: len (262144) > buf size (4096) 00:14:19.337 [2024-11-05 16:38:23.696010] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:14:19.337 [2024-11-05 16:38:23.696493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.337 [2024-11-05 16:38:23.696530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:19.337 [2024-11-05 16:38:23.696623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.337 [2024-11-05 16:38:23.696647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:19.337 #23 NEW cov: 12521 ft: 15002 corp: 14/213b lim: 30 exec/s: 23 rss: 74Mb L: 14/25 MS: 1 CMP- DE: "\001\000@\000"- 00:14:19.337 [2024-11-05 16:38:23.796089] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262144) > buf size (4096) 00:14:19.337 [2024-11-05 16:38:23.796386] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffd2 00:14:19.337 [2024-11-05 16:38:23.796873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.337 [2024-11-05 16:38:23.796912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:19.337 [2024-11-05 16:38:23.797013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.337 [2024-11-05 16:38:23.797042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:19.337 #24 NEW cov: 12521 ft: 15049 corp: 15/227b lim: 30 exec/s: 24 rss: 74Mb L: 14/25 MS: 1 CrossOver- 00:14:19.337 [2024-11-05 16:38:23.886403] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:19.337 [2024-11-05 16:38:23.886701] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1001516) > buf size (4096) 00:14:19.337 [2024-11-05 16:38:23.886989] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:14:19.337 [2024-11-05 16:38:23.887473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.337 [2024-11-05 16:38:23.887511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:19.337 [2024-11-05 16:38:23.887610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d20a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.337 [2024-11-05 16:38:23.887633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:19.337 [2024-11-05 16:38:23.887729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:400083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.337 [2024-11-05 16:38:23.887751] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:19.596 #25 NEW cov: 12521 ft: 15074 corp: 16/249b lim: 30 exec/s: 25 rss: 74Mb L: 22/25 MS: 1 CrossOver- 00:14:19.596 [2024-11-05 16:38:23.976860] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:14:19.596 [2024-11-05 16:38:23.977147] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:14:19.596 [2024-11-05 16:38:23.977444] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2bf 00:14:19.596 [2024-11-05 16:38:23.977727] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:14:19.596 [2024-11-05 16:38:23.978233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.596 [2024-11-05 16:38:23.978271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:19.596 [2024-11-05 16:38:23.978370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:01000040 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.596 [2024-11-05 16:38:23.978393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:19.596 [2024-11-05 16:38:23.978489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff02ff cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.596 [2024-11-05 16:38:23.978510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:19.596 [2024-11-05 16:38:23.978610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:400083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.596 [2024-11-05 16:38:23.978631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:19.596 #26 NEW cov: 12521 ft: 15083 corp: 17/277b lim: 30 exec/s: 26 rss: 74Mb L: 28/28 MS: 1 CopyPart- 00:14:19.596 [2024-11-05 16:38:24.077091] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:19.596 [2024-11-05 16:38:24.077362] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:19.596 [2024-11-05 16:38:24.077627] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:19.596 [2024-11-05 16:38:24.078129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.596 [2024-11-05 16:38:24.078167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:19.596 [2024-11-05 16:38:24.078266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:f2d202c3 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.596 [2024-11-05 16:38:24.078288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:19.596 [2024-11-05 16:38:24.078380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 
nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.596 [2024-11-05 16:38:24.078403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:19.596 #27 NEW cov: 12521 ft: 15097 corp: 18/296b lim: 30 exec/s: 27 rss: 74Mb L: 19/28 MS: 1 ChangeBit- 00:14:19.596 [2024-11-05 16:38:24.167300] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xd2 00:14:19.596 [2024-11-05 16:38:24.167801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d2010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.597 [2024-11-05 16:38:24.167840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:19.856 #32 NEW cov: 12521 ft: 15180 corp: 19/302b lim: 30 exec/s: 32 rss: 74Mb L: 6/28 MS: 5 ChangeBit-InsertByte-CopyPart-CrossOver-PersAutoDict- DE: "\001\000@\000"- 00:14:19.856 [2024-11-05 16:38:24.227744] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:19.856 [2024-11-05 16:38:24.228025] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d20a 00:14:19.856 [2024-11-05 16:38:24.228498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.856 [2024-11-05 16:38:24.228537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:19.856 [2024-11-05 16:38:24.228637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d2d202f8 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.856 [2024-11-05 16:38:24.228659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:19.856 #33 NEW cov: 12521 ft: 15202 corp: 20/314b lim: 30 exec/s: 33 rss: 74Mb L: 12/28 MS: 1 InsertByte- 00:14:19.856 [2024-11-05 16:38:24.298058] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000d2d2 00:14:19.856 [2024-11-05 16:38:24.298360] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (1001516) > buf size (4096) 00:14:19.856 [2024-11-05 16:38:24.298640] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:14:19.856 [2024-11-05 16:38:24.299172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:d2d202d2 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.856 [2024-11-05 16:38:24.299209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:19.856 [2024-11-05 16:38:24.299302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:d20a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.856 [2024-11-05 16:38:24.299324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:19.856 [2024-11-05 16:38:24.299422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:400083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.856 [2024-11-05 16:38:24.299443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:19.856 #34 NEW cov: 12521 ft: 15241 corp: 21/335b lim: 30 exec/s: 17 rss: 75Mb L: 21/28 MS: 1 EraseBytes- 00:14:19.856 #34 DONE cov: 12521 ft: 15241 corp: 21/335b lim: 30 exec/s: 17 rss: 75Mb 00:14:19.856 ###### Recommended dictionary. ###### 00:14:19.856 "\001\000@\000" # Uses: 1 00:14:19.856 ###### End of recommended dictionary. ###### 00:14:19.856 Done 34 runs in 2 second(s) 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:14:20.116 16:38:24 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:14:20.116 [2024-11-05 16:38:24.501506] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
00:14:20.116 [2024-11-05 16:38:24.501560] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3521285 ] 00:14:20.375 [2024-11-05 16:38:24.753037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.375 [2024-11-05 16:38:24.801532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.375 [2024-11-05 16:38:24.865535] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.375 [2024-11-05 16:38:24.881772] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:14:20.375 INFO: Running with entropic power schedule (0xFF, 100). 00:14:20.375 INFO: Seed: 3881310191 00:14:20.375 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:14:20.375 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:14:20.375 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:14:20.375 INFO: A corpus is not provided, starting from an empty corpus 00:14:20.375 #2 INITED exec/s: 0 rss: 66Mb 00:14:20.375 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:14:20.375 This may also happen if the target rejected all inputs we tried so far 00:14:20.375 [2024-11-05 16:38:24.927402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:bbbb000a cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:20.375 [2024-11-05 16:38:24.927432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:20.894 NEW_FUNC[1/714]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:14:20.894 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:14:20.894 #6 NEW cov: 12226 ft: 12223 corp: 2/14b lim: 35 exec/s: 0 rss: 74Mb L: 13/13 MS: 4 ShuffleBytes-CopyPart-EraseBytes-InsertRepeatedBytes- 00:14:20.894 [2024-11-05 16:38:25.388830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0abb00ff cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:20.894 [2024-11-05 16:38:25.388868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:20.894 [2024-11-05 16:38:25.388942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:bbbb00bb cdw11:bb000abb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:20.894 [2024-11-05 16:38:25.388958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:20.894 NEW_FUNC[1/1]: 0x1f778c8 in spdk_thread_get_from_ctx /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:820 00:14:20.894 #9 NEW cov: 12341 ft: 13084 corp: 3/29b lim: 35 exec/s: 0 rss: 74Mb L: 15/15 MS: 3 InsertByte-ChangeByte-CrossOver- 00:14:20.894 [2024-11-05 16:38:25.439188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0abb00ff cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:20.894 [2024-11-05 
16:38:25.439216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:20.894 [2024-11-05 16:38:25.439276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:242400bb cdw11:24002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:20.894 [2024-11-05 16:38:25.439290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:20.894 [2024-11-05 16:38:25.439348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:24240024 cdw11:24002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:20.894 [2024-11-05 16:38:25.439363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:20.894 [2024-11-05 16:38:25.439420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:bb0a00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:20.894 [2024-11-05 16:38:25.439435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:21.154 #10 NEW cov: 12347 ft: 13931 corp: 4/57b lim: 35 exec/s: 0 rss: 74Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:14:21.154 [2024-11-05 16:38:25.499148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d8d800d8 cdw11:d800d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.154 [2024-11-05 16:38:25.499174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.154 [2024-11-05 16:38:25.499249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d8d800d8 cdw11:d800d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.154 [2024-11-05 16:38:25.499264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:21.154 [2024-11-05 16:38:25.499321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:bbbb00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.154 [2024-11-05 16:38:25.499338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:21.154 #11 NEW cov: 12432 ft: 14416 corp: 5/83b lim: 35 exec/s: 0 rss: 74Mb L: 26/28 MS: 1 InsertRepeatedBytes- 00:14:21.154 [2024-11-05 16:38:25.559157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0abb00ff cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.154 [2024-11-05 16:38:25.559183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.154 [2024-11-05 16:38:25.559259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:bbbb00bb cdw11:bb000abb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.154 [2024-11-05 16:38:25.559274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:21.154 #12 NEW cov: 12432 ft: 14464 corp: 6/98b lim: 35 exec/s: 0 rss: 74Mb L: 15/28 MS: 1 ChangeBit- 00:14:21.154 [2024-11-05 16:38:25.599611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 
cdw10:0abb00ff cdw11:0a00bbff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.154 [2024-11-05 16:38:25.599637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.154 [2024-11-05 16:38:25.599697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:bbbb00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.154 [2024-11-05 16:38:25.599719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:21.154 [2024-11-05 16:38:25.599779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:bb0a00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.154 [2024-11-05 16:38:25.599794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:21.154 [2024-11-05 16:38:25.599855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:0abb00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.154 [2024-11-05 16:38:25.599870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:21.154 #13 NEW cov: 12432 ft: 14548 corp: 7/128b lim: 35 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 CrossOver- 00:14:21.154 [2024-11-05 16:38:25.659750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0abb00ff cdw11:0a00bbff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.154 [2024-11-05 16:38:25.659776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.154 [2024-11-05 16:38:25.659851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:bbbb00bb cdw11:bb00bbdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.154 [2024-11-05 16:38:25.659866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:21.154 [2024-11-05 16:38:25.659925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:bb0a00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.154 [2024-11-05 16:38:25.659940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:21.154 [2024-11-05 16:38:25.660000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:0abb00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.154 [2024-11-05 16:38:25.660014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:21.154 #14 NEW cov: 12432 ft: 14596 corp: 8/158b lim: 35 exec/s: 0 rss: 74Mb L: 30/30 MS: 1 ChangeByte- 00:14:21.154 [2024-11-05 16:38:25.719856] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:21.154 [2024-11-05 16:38:25.720144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0abb00ff cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.154 [2024-11-05 16:38:25.720171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.154 [2024-11-05 
16:38:25.720229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:242400bb cdw11:24002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.154 [2024-11-05 16:38:25.720244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:21.154 [2024-11-05 16:38:25.720303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:24240024 cdw11:24002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.154 [2024-11-05 16:38:25.720316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:21.155 [2024-11-05 16:38:25.720375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:bb0a00bb cdw11:0000bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.155 [2024-11-05 16:38:25.720389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:21.155 [2024-11-05 16:38:25.720446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:bb000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.155 [2024-11-05 16:38:25.720463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:14:21.414 #20 NEW cov: 12443 ft: 14744 corp: 9/193b lim: 35 exec/s: 0 rss: 75Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:14:21.414 [2024-11-05 16:38:25.780030] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:21.414 [2024-11-05 16:38:25.780303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0abb00ff cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.414 [2024-11-05 16:38:25.780329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.414 [2024-11-05 16:38:25.780386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:242400bb cdw11:24002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.414 [2024-11-05 16:38:25.780400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:21.414 [2024-11-05 16:38:25.780457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:24240024 cdw11:24002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.414 [2024-11-05 16:38:25.780471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:21.414 [2024-11-05 16:38:25.780529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:bb0a00bb cdw11:0000bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.414 [2024-11-05 16:38:25.780543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:21.414 [2024-11-05 16:38:25.780601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:bb000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.414 [2024-11-05 16:38:25.780617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:14:21.414 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:14:21.414 #21 NEW cov: 12466 ft: 14800 corp: 10/228b lim: 35 exec/s: 0 rss: 75Mb L: 35/35 MS: 1 ChangeBit- 00:14:21.414 [2024-11-05 16:38:25.840317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0abb00ff cdw11:0a00bbff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.414 [2024-11-05 16:38:25.840343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.414 [2024-11-05 16:38:25.840421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:bbbb00bb cdw11:bb001e00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.414 [2024-11-05 16:38:25.840437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:21.414 [2024-11-05 16:38:25.840497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:bb0a00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.414 [2024-11-05 16:38:25.840510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:21.414 [2024-11-05 16:38:25.840569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:0abb00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.414 [2024-11-05 16:38:25.840583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:21.414 #22 NEW cov: 12466 ft: 14855 corp: 11/258b lim: 35 exec/s: 0 rss: 75Mb L: 30/35 MS: 1 ChangeBinInt- 00:14:21.414 [2024-11-05 16:38:25.880261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d8d800d8 cdw11:d800d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.414 [2024-11-05 16:38:25.880287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.414 [2024-11-05 16:38:25.880344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d8d800d8 cdw11:d800d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.414 [2024-11-05 16:38:25.880359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:21.414 [2024-11-05 16:38:25.880420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:bbbb00bb cdw11:bb0025bb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.414 [2024-11-05 16:38:25.880433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:21.414 #23 NEW cov: 12466 ft: 14893 corp: 12/284b lim: 35 exec/s: 23 rss: 75Mb L: 26/35 MS: 1 ChangeByte- 00:14:21.414 [2024-11-05 16:38:25.940132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0abb00ff cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.414 [2024-11-05 16:38:25.940158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.414 #24 NEW cov: 12466 ft: 14916 corp: 13/297b lim: 35 exec/s: 24 rss: 75Mb L: 13/35 MS: 1 EraseBytes- 00:14:21.414 [2024-11-05 16:38:25.980695] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0abb00ff cdw11:0a00bbff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.414 [2024-11-05 16:38:25.980726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.414 [2024-11-05 16:38:25.980802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:bbbb00bb cdw11:c400bbdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.414 [2024-11-05 16:38:25.980817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:21.414 [2024-11-05 16:38:25.980877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:bb0a00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.414 [2024-11-05 16:38:25.980891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:21.414 [2024-11-05 16:38:25.980949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:0abb00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.414 [2024-11-05 16:38:25.980964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:21.674 #25 NEW cov: 12466 ft: 14931 corp: 14/327b lim: 35 exec/s: 25 rss: 75Mb L: 30/35 MS: 1 ChangeBinInt- 00:14:21.674 [2024-11-05 16:38:26.040893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0abb00ff cdw11:0a00bbff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.040922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.674 [2024-11-05 16:38:26.040986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:bbbb00bb cdw11:c400bbdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.041002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:21.674 [2024-11-05 16:38:26.041060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:bb0a00bb cdw11:bb00bb0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.041075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:21.674 [2024-11-05 16:38:26.041134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:bbbb00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.041149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:21.674 #26 NEW cov: 12466 ft: 14944 corp: 15/357b lim: 35 exec/s: 26 rss: 75Mb L: 30/35 MS: 1 ShuffleBytes- 00:14:21.674 [2024-11-05 16:38:26.100984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d8d800d8 cdw11:d800d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.101012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.674 [2024-11-05 16:38:26.101066] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d8d800d8 cdw11:d800d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.101081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:21.674 [2024-11-05 16:38:26.101133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0abb00d8 cdw11:2500bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.101148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:21.674 [2024-11-05 16:38:26.101206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:bbbb00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.101219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:21.674 #27 NEW cov: 12466 ft: 14968 corp: 16/385b lim: 35 exec/s: 27 rss: 75Mb L: 28/35 MS: 1 CopyPart- 00:14:21.674 [2024-11-05 16:38:26.161165] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:21.674 [2024-11-05 16:38:26.161448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0abb00ff cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.161475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.674 [2024-11-05 16:38:26.161533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:242400bb cdw11:24002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.161548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:21.674 [2024-11-05 16:38:26.161606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:24240024 cdw11:bb002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.161623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:21.674 [2024-11-05 16:38:26.161680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:bb0a0024 cdw11:0000bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.161694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:21.674 [2024-11-05 16:38:26.161747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:bb000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.161763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:14:21.674 #28 NEW cov: 12466 ft: 15026 corp: 17/420b lim: 35 exec/s: 28 rss: 75Mb L: 35/35 MS: 1 ShuffleBytes- 00:14:21.674 [2024-11-05 16:38:26.201412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0abb00ff cdw11:0a00bbfd SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.201438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.674 [2024-11-05 16:38:26.201497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:bbbb00bb cdw11:c400bbdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.201511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:21.674 [2024-11-05 16:38:26.201569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:bb0a00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.201583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:21.674 [2024-11-05 16:38:26.201642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:0abb00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.201656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:21.674 #29 NEW cov: 12466 ft: 15056 corp: 18/450b lim: 35 exec/s: 29 rss: 75Mb L: 30/35 MS: 1 ChangeBit- 00:14:21.674 [2024-11-05 16:38:26.241514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d8d800d8 cdw11:d800d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.241541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.674 [2024-11-05 16:38:26.241602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d8d800d8 cdw11:d800d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.241617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:21.674 [2024-11-05 16:38:26.241675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0abb00d8 cdw11:2500bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.241690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:21.674 [2024-11-05 16:38:26.241744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:bbbb00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.674 [2024-11-05 16:38:26.241759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:21.933 #30 NEW cov: 12466 ft: 15092 corp: 19/478b lim: 35 exec/s: 30 rss: 75Mb L: 28/35 MS: 1 ShuffleBytes- 00:14:21.933 [2024-11-05 16:38:26.301632] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:21.933 [2024-11-05 16:38:26.301908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0abb00ff cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.933 [2024-11-05 16:38:26.301939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.933 [2024-11-05 16:38:26.301997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:242400bb cdw11:24002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:21.933 [2024-11-05 16:38:26.302011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:21.933 [2024-11-05 16:38:26.302070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:24240024 cdw11:2400242c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.933 [2024-11-05 16:38:26.302084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:21.933 [2024-11-05 16:38:26.302143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:bb0a00bb cdw11:0000bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.934 [2024-11-05 16:38:26.302157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:21.934 [2024-11-05 16:38:26.302215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:bb000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.934 [2024-11-05 16:38:26.302231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:14:21.934 #31 NEW cov: 12466 ft: 15099 corp: 20/513b lim: 35 exec/s: 31 rss: 75Mb L: 35/35 MS: 1 ChangeBit- 00:14:21.934 [2024-11-05 16:38:26.361701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d8d800d8 cdw11:d800d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.934 [2024-11-05 16:38:26.361734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.934 [2024-11-05 16:38:26.361811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d8d800d8 cdw11:d800d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.934 [2024-11-05 16:38:26.361826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:21.934 [2024-11-05 16:38:26.361892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:bbbb00bb cdw11:bb0025bb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.934 [2024-11-05 16:38:26.361906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:21.934 #32 NEW cov: 12466 ft: 15105 corp: 21/539b lim: 35 exec/s: 32 rss: 75Mb L: 26/35 MS: 1 ShuffleBytes- 00:14:21.934 [2024-11-05 16:38:26.401883] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:21.934 [2024-11-05 16:38:26.402150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0abb00ff cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.934 [2024-11-05 16:38:26.402176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.934 [2024-11-05 16:38:26.402234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:242400bb cdw11:24002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.934 [2024-11-05 16:38:26.402248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:21.934 [2024-11-05 16:38:26.402307] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:24240024 cdw11:24002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.934 [2024-11-05 16:38:26.402322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:21.934 [2024-11-05 16:38:26.402379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:bb0a00bb cdw11:0000bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.934 [2024-11-05 16:38:26.402395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:21.934 [2024-11-05 16:38:26.402454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:bb000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.934 [2024-11-05 16:38:26.402470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:14:21.934 #33 NEW cov: 12466 ft: 15109 corp: 22/574b lim: 35 exec/s: 33 rss: 75Mb L: 35/35 MS: 1 ShuffleBytes- 00:14:21.934 [2024-11-05 16:38:26.442094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:d8d800d8 cdw11:d80024d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.934 [2024-11-05 16:38:26.442120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.934 [2024-11-05 16:38:26.442180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d8d800d8 cdw11:d800d8d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.934 [2024-11-05 16:38:26.442195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:21.934 [2024-11-05 16:38:26.442252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:0abb00d8 cdw11:2500bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.934 [2024-11-05 16:38:26.442266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:21.934 [2024-11-05 16:38:26.442321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:bbbb00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.934 [2024-11-05 16:38:26.442334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:21.934 #34 NEW cov: 12466 ft: 15157 corp: 23/602b lim: 35 exec/s: 34 rss: 75Mb L: 28/35 MS: 1 ChangeByte- 00:14:21.934 [2024-11-05 16:38:26.501571] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:21.934 [2024-11-05 16:38:26.501745] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:21.934 [2024-11-05 16:38:26.502010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.934 [2024-11-05 16:38:26.502040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:21.934 [2024-11-05 16:38:26.502102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:21.934 [2024-11-05 16:38:26.502118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:22.193 #35 NEW cov: 12466 ft: 15221 corp: 24/619b lim: 35 exec/s: 35 rss: 75Mb L: 17/35 MS: 1 InsertRepeatedBytes- 00:14:22.193 [2024-11-05 16:38:26.542233] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:22.193 [2024-11-05 16:38:26.542494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0abb00ff cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.193 [2024-11-05 16:38:26.542521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:22.193 [2024-11-05 16:38:26.542583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:242400bb cdw11:24002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.193 [2024-11-05 16:38:26.542597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:22.194 [2024-11-05 16:38:26.542656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:24240024 cdw11:24002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.194 [2024-11-05 16:38:26.542676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:22.194 [2024-11-05 16:38:26.542728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:bb0a00bb cdw11:0000bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.194 [2024-11-05 16:38:26.542758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:22.194 [2024-11-05 16:38:26.542819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:bb000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.194 [2024-11-05 16:38:26.542836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:14:22.194 #36 NEW cov: 12466 ft: 15249 corp: 25/654b lim: 35 exec/s: 36 rss: 75Mb L: 35/35 MS: 1 CopyPart- 00:14:22.194 [2024-11-05 16:38:26.582468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a3900ff cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.194 [2024-11-05 16:38:26.582494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:22.194 [2024-11-05 16:38:26.582569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:242400bb cdw11:24002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.194 [2024-11-05 16:38:26.582584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:22.194 [2024-11-05 16:38:26.582641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:24240024 cdw11:24002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.194 [2024-11-05 16:38:26.582655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:22.194 [2024-11-05 
16:38:26.582710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:bb0a00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.194 [2024-11-05 16:38:26.582729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:22.194 #37 NEW cov: 12466 ft: 15291 corp: 26/682b lim: 35 exec/s: 37 rss: 75Mb L: 28/35 MS: 1 ChangeByte- 00:14:22.194 [2024-11-05 16:38:26.622104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:bbbb000a cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.194 [2024-11-05 16:38:26.622129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:22.194 #38 NEW cov: 12466 ft: 15319 corp: 27/695b lim: 35 exec/s: 38 rss: 75Mb L: 13/35 MS: 1 ChangeBinInt- 00:14:22.194 [2024-11-05 16:38:26.662626] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:22.194 [2024-11-05 16:38:26.662911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0abb00ff cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.194 [2024-11-05 16:38:26.662937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:22.194 [2024-11-05 16:38:26.662996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:242400bb cdw11:24002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.194 [2024-11-05 16:38:26.663010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:22.194 [2024-11-05 16:38:26.663066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:24240024 cdw11:24002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.194 [2024-11-05 16:38:26.663081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:22.194 [2024-11-05 16:38:26.663143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:bb0a00bb cdw11:0000bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.194 [2024-11-05 16:38:26.663158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:22.194 [2024-11-05 16:38:26.663217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:bb000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.194 [2024-11-05 16:38:26.663234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:14:22.194 #39 NEW cov: 12466 ft: 15332 corp: 28/730b lim: 35 exec/s: 39 rss: 75Mb L: 35/35 MS: 1 CopyPart- 00:14:22.194 [2024-11-05 16:38:26.702277] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:22.194 [2024-11-05 16:38:26.702554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:000000d4 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.194 [2024-11-05 16:38:26.702581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:22.194 [2024-11-05 
16:38:26.702640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.194 [2024-11-05 16:38:26.702657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:22.194 #42 NEW cov: 12466 ft: 15395 corp: 29/750b lim: 35 exec/s: 42 rss: 75Mb L: 20/35 MS: 3 ChangeBit-ChangeByte-InsertRepeatedBytes- 00:14:22.194 [2024-11-05 16:38:26.742637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:bb2a000a cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.194 [2024-11-05 16:38:26.742663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:22.194 [2024-11-05 16:38:26.742741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:43bb00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.194 [2024-11-05 16:38:26.742756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:22.453 #43 NEW cov: 12466 ft: 15415 corp: 30/764b lim: 35 exec/s: 43 rss: 75Mb L: 14/35 MS: 1 InsertByte- 00:14:22.453 [2024-11-05 16:38:26.802819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:bb0a00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.453 [2024-11-05 16:38:26.802844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:22.453 [2024-11-05 16:38:26.802906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:0abb00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.454 [2024-11-05 16:38:26.802920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:22.454 #44 NEW cov: 12466 ft: 15491 corp: 31/780b lim: 35 exec/s: 44 rss: 75Mb L: 16/35 MS: 1 EraseBytes- 00:14:22.454 [2024-11-05 16:38:26.843190] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:22.454 [2024-11-05 16:38:26.843469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0abb00ff cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.454 [2024-11-05 16:38:26.843496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:22.454 [2024-11-05 16:38:26.843558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:242400bb cdw11:24002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.454 [2024-11-05 16:38:26.843573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:22.454 [2024-11-05 16:38:26.843634] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:24240024 cdw11:bb002424 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.454 [2024-11-05 16:38:26.843649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:22.454 [2024-11-05 16:38:26.843704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 
cdw10:0abb0024 cdw11:bb00bb24 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.454 [2024-11-05 16:38:26.843724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:22.454 [2024-11-05 16:38:26.843783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:bb000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.454 [2024-11-05 16:38:26.843800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:14:22.454 #45 NEW cov: 12466 ft: 15494 corp: 32/815b lim: 35 exec/s: 45 rss: 75Mb L: 35/35 MS: 1 ShuffleBytes- 00:14:22.454 [2024-11-05 16:38:26.903532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0abb00ff cdw11:3300bb33 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.454 [2024-11-05 16:38:26.903557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:22.454 [2024-11-05 16:38:26.903633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:33330033 cdw11:33003333 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.454 [2024-11-05 16:38:26.903649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:22.454 [2024-11-05 16:38:26.903707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:33330033 cdw11:33003333 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.454 [2024-11-05 16:38:26.903726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:22.454 [2024-11-05 16:38:26.903788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:33330033 cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.454 [2024-11-05 16:38:26.903802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:22.454 [2024-11-05 16:38:26.903860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:bb0a00bb cdw11:bb00bbbb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:22.454 [2024-11-05 16:38:26.903875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:14:22.454 #46 NEW cov: 12466 ft: 15503 corp: 33/850b lim: 35 exec/s: 23 rss: 75Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:14:22.454 #46 DONE cov: 12466 ft: 15503 corp: 33/850b lim: 35 exec/s: 23 rss: 75Mb 00:14:22.454 Done 46 runs in 2 second(s) 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local 
corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:14:22.713 16:38:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:14:22.713 [2024-11-05 16:38:27.087313] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:14:22.713 [2024-11-05 16:38:27.087387] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3521638 ] 00:14:22.973 [2024-11-05 16:38:27.363557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.973 [2024-11-05 16:38:27.411884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.973 [2024-11-05 16:38:27.475986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.973 [2024-11-05 16:38:27.492220] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:14:22.973 INFO: Running with entropic power schedule (0xFF, 100). 00:14:22.973 INFO: Seed: 2197335841 00:14:22.973 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:14:22.973 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:14:22.973 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:14:22.973 INFO: A corpus is not provided, starting from an empty corpus 00:14:22.973 #2 INITED exec/s: 0 rss: 66Mb 00:14:22.973 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
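The nvmf/run.sh xtrace frames above (run.sh@23 through run.sh@54) are SPDK's per-fuzzer setup helper pulled apart line by line. Reassembled, the flow is roughly the sketch below: derive a unique NVMe/TCP port from the fuzzer number, rewrite the shared JSON target config to listen on that port, point LeakSanitizer at a suppression file, then launch llvm_nvme_fuzz against the resulting listener. Only the bare commands appear in the trace, so the output redirections and the $rootdir/$output_dir roots used here are assumptions.

  start_llvm_fuzz() {
      local fuzzer_type=$1 timen=$2 core=$3
      local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
      local nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
      local suppress_file=/var/tmp/suppress_nvmf_fuzz
      local LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0
      # printf %02d turns fuzzer 3 into "03", giving ports 4403, 4404, 4405, ...
      local port="44$(printf %02d "$fuzzer_type")"

      mkdir -p "$corpus_dir"
      local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
      # Clone the template config with only the listener port rewritten
      # (the redirect into $nvmf_cfg is assumed; the trace shows only the sed).
      sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
          "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
      echo "leak:spdk_nvmf_qpair_disconnect" > "$suppress_file"
      echo "leak:nvmf_ctrlr_create" >> "$suppress_file"

      "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m "$core" -s 512 \
          -P "$output_dir/llvm/" -F "$trid" -c "$nvmf_cfg" -t "$timen" \
          -D "$corpus_dir" -Z "$fuzzer_type"
      rm -rf "$nvmf_cfg" "$suppress_file"
  }

Each fuzzer thus gets its own in-process TCP listener, config file, and corpus directory, which is why the traces for runs 3, 4, and 5 below differ only in the port and path suffixes.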
00:14:22.973 This may also happen if the target rejected all inputs we tried so far 00:14:23.491 NEW_FUNC[1/704]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:14:23.491 NEW_FUNC[2/704]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:14:23.491 #17 NEW cov: 12130 ft: 12129 corp: 2/18b lim: 20 exec/s: 0 rss: 74Mb L: 17/17 MS: 5 ChangeByte-CrossOver-EraseBytes-CopyPart-InsertRepeatedBytes- 00:14:23.750 #18 NEW cov: 12260 ft: 12677 corp: 3/35b lim: 20 exec/s: 0 rss: 74Mb L: 17/17 MS: 1 ChangeBinInt- 00:14:23.750 #19 NEW cov: 12274 ft: 13179 corp: 4/50b lim: 20 exec/s: 0 rss: 74Mb L: 15/17 MS: 1 EraseBytes- 00:14:23.750 #20 NEW cov: 12359 ft: 13488 corp: 5/67b lim: 20 exec/s: 0 rss: 74Mb L: 17/17 MS: 1 CMP- DE: "\015\000\000\000"- 00:14:23.750 #21 NEW cov: 12359 ft: 13591 corp: 6/84b lim: 20 exec/s: 0 rss: 74Mb L: 17/17 MS: 1 PersAutoDict- DE: "\015\000\000\000"- 00:14:24.009 #22 NEW cov: 12359 ft: 13628 corp: 7/99b lim: 20 exec/s: 0 rss: 74Mb L: 15/17 MS: 1 ChangeBit- 00:14:24.009 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:14:24.009 #23 NEW cov: 12382 ft: 13769 corp: 8/118b lim: 20 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 PersAutoDict- DE: "\015\000\000\000"- 00:14:24.009 NEW_FUNC[1/4]: 0x1366578 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3482 00:14:24.009 NEW_FUNC[2/4]: 0x13670f8 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3424 00:14:24.009 #24 NEW cov: 12465 ft: 13881 corp: 9/137b lim: 20 exec/s: 24 rss: 74Mb L: 19/19 MS: 1 PersAutoDict- DE: "\015\000\000\000"- 00:14:24.009 [2024-11-05 16:38:28.561175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.009 [2024-11-05 16:38:28.561230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:24.268 NEW_FUNC[1/15]: 0x1859148 in nvme_ctrlr_queue_async_event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3300 00:14:24.268 NEW_FUNC[2/15]: 0x187dbb8 in nvme_ctrlr_process_async_event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:3260 00:14:24.268 #25 NEW cov: 12684 ft: 14207 corp: 10/154b lim: 20 exec/s: 25 rss: 75Mb L: 17/19 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:14:24.268 [2024-11-05 16:38:28.651409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.268 [2024-11-05 16:38:28.651449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:24.268 NEW_FUNC[1/1]: 0x158bba8 in _nvmf_tcp_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:3649 00:14:24.268 #26 NEW cov: 12711 ft: 14379 corp: 11/171b lim: 20 exec/s: 26 rss: 75Mb L: 17/19 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:14:24.268 #27 NEW cov: 12711 ft: 14401 corp: 12/186b lim: 20 exec/s: 27 rss: 75Mb L: 15/19 MS: 1 CrossOver- 00:14:24.268 #28 NEW cov: 12711 ft: 14417 corp: 13/205b lim: 20 exec/s: 28 rss: 75Mb L: 19/19 MS: 1 PersAutoDict- DE: "\015\000\000\000"- 00:14:24.528 #29 NEW cov: 12711 
ft: 14432 corp: 14/220b lim: 20 exec/s: 29 rss: 75Mb L: 15/19 MS: 1 CopyPart- 00:14:24.528 #30 NEW cov: 12711 ft: 14451 corp: 15/233b lim: 20 exec/s: 30 rss: 75Mb L: 13/19 MS: 1 EraseBytes- 00:14:24.528 [2024-11-05 16:38:28.942250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.528 [2024-11-05 16:38:28.942290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:24.528 #31 NEW cov: 12711 ft: 14479 corp: 16/252b lim: 20 exec/s: 31 rss: 75Mb L: 19/19 MS: 1 CrossOver- 00:14:24.528 [2024-11-05 16:38:29.022454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.528 [2024-11-05 16:38:29.022492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:24.528 #32 NEW cov: 12711 ft: 14509 corp: 17/271b lim: 20 exec/s: 32 rss: 75Mb L: 19/19 MS: 1 CopyPart- 00:14:24.528 [2024-11-05 16:38:29.102764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.528 [2024-11-05 16:38:29.102801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:24.528 [2024-11-05 16:38:29.102959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.528 [2024-11-05 16:38:29.102983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:24.787 #33 NEW cov: 12712 ft: 14814 corp: 18/290b lim: 20 exec/s: 33 rss: 75Mb L: 19/19 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:14:24.787 #34 NEW cov: 12712 ft: 14825 corp: 19/303b lim: 20 exec/s: 34 rss: 75Mb L: 13/19 MS: 1 ChangeBinInt- 00:14:24.787 #35 NEW cov: 12712 ft: 14836 corp: 20/320b lim: 20 exec/s: 35 rss: 75Mb L: 17/19 MS: 1 ShuffleBytes- 00:14:24.787 #36 NEW cov: 12712 ft: 14851 corp: 21/335b lim: 20 exec/s: 36 rss: 75Mb L: 15/19 MS: 1 CrossOver- 00:14:24.787 [2024-11-05 16:38:29.333387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:24.787 [2024-11-05 16:38:29.333425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:25.046 #37 NEW cov: 12712 ft: 14945 corp: 22/354b lim: 20 exec/s: 37 rss: 75Mb L: 19/19 MS: 1 ChangeBinInt- 00:14:25.046 [2024-11-05 16:38:29.413604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.046 [2024-11-05 16:38:29.413641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:25.046 [2024-11-05 16:38:29.413798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.046 [2024-11-05 16:38:29.413822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:25.046 #38 NEW cov: 12712 ft: 14966 corp: 23/371b lim: 20 exec/s: 38 rss: 75Mb L: 17/19 MS: 1 PersAutoDict- 
DE: "\001\000\000\000\000\000\000\000"- 00:14:25.046 [2024-11-05 16:38:29.493822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:25.046 [2024-11-05 16:38:29.493858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:25.046 #39 NEW cov: 12712 ft: 14989 corp: 24/388b lim: 20 exec/s: 39 rss: 75Mb L: 17/19 MS: 1 ChangeBit- 00:14:25.046 #40 NEW cov: 12712 ft: 14995 corp: 25/405b lim: 20 exec/s: 20 rss: 75Mb L: 17/19 MS: 1 ChangeBit- 00:14:25.046 #40 DONE cov: 12712 ft: 14995 corp: 25/405b lim: 20 exec/s: 20 rss: 75Mb 00:14:25.046 ###### Recommended dictionary. ###### 00:14:25.046 "\015\000\000\000" # Uses: 4 00:14:25.046 "\001\000\000\000\000\000\000\000" # Uses: 3 00:14:25.046 ###### End of recommended dictionary. ###### 00:14:25.046 Done 40 runs in 2 second(s) 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:14:25.304 16:38:29 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:14:25.304 [2024-11-05 16:38:29.722569] Starting SPDK 
v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:14:25.304 [2024-11-05 16:38:29.722627] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3522000 ] 00:14:25.563 [2024-11-05 16:38:29.965477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.563 [2024-11-05 16:38:30.014135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.563 [2024-11-05 16:38:30.087840] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.563 [2024-11-05 16:38:30.104081] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:14:25.563 INFO: Running with entropic power schedule (0xFF, 100). 00:14:25.563 INFO: Seed: 514372538 00:14:25.563 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:14:25.563 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:14:25.563 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:14:25.563 INFO: A corpus is not provided, starting from an empty corpus 00:14:25.563 #2 INITED exec/s: 0 rss: 66Mb 00:14:25.563 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:14:25.563 This may also happen if the target rejected all inputs we tried so far 00:14:25.822 [2024-11-05 16:38:30.150008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:25.822 [2024-11-05 16:38:30.150039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:25.822 [2024-11-05 16:38:30.150097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:25.822 [2024-11-05 16:38:30.150111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:26.081 NEW_FUNC[1/716]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:14:26.081 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:14:26.081 #9 NEW cov: 12248 ft: 12228 corp: 2/15b lim: 35 exec/s: 0 rss: 73Mb L: 14/14 MS: 2 InsertByte-InsertRepeatedBytes- 00:14:26.081 [2024-11-05 16:38:30.522966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.081 [2024-11-05 16:38:30.523027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:26.081 [2024-11-05 16:38:30.523136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.081 [2024-11-05 16:38:30.523161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:26.081 #14 NEW cov: 12362 ft: 12797 corp: 3/33b lim: 35 exec/s: 0 rss: 
73Mb L: 18/18 MS: 5 ShuffleBytes-CrossOver-CrossOver-CopyPart-InsertRepeatedBytes- 00:14:26.081 [2024-11-05 16:38:30.592830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.082 [2024-11-05 16:38:30.592875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:26.082 [2024-11-05 16:38:30.592969] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.082 [2024-11-05 16:38:30.592992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:26.082 #15 NEW cov: 12368 ft: 13027 corp: 4/47b lim: 35 exec/s: 0 rss: 73Mb L: 14/18 MS: 1 ChangeBinInt- 00:14:26.340 [2024-11-05 16:38:30.682674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.340 [2024-11-05 16:38:30.682720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:26.340 #16 NEW cov: 12453 ft: 13940 corp: 5/55b lim: 35 exec/s: 0 rss: 73Mb L: 8/18 MS: 1 EraseBytes- 00:14:26.341 [2024-11-05 16:38:30.754152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.341 [2024-11-05 16:38:30.754189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:26.341 [2024-11-05 16:38:30.754286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:004f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.341 [2024-11-05 16:38:30.754307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:26.341 [2024-11-05 16:38:30.754402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.341 [2024-11-05 16:38:30.754422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:26.341 [2024-11-05 16:38:30.754512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.341 [2024-11-05 16:38:30.754532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:26.341 #17 NEW cov: 12453 ft: 14516 corp: 6/88b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:14:26.341 [2024-11-05 16:38:30.824617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.341 [2024-11-05 16:38:30.824662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:26.341 [2024-11-05 16:38:30.824771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:004f0002 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.341 [2024-11-05 16:38:30.824793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:26.341 [2024-11-05 16:38:30.824894] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4f4f4f99 cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.341 [2024-11-05 16:38:30.824915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:26.341 [2024-11-05 16:38:30.825018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.341 [2024-11-05 16:38:30.825040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:26.341 #18 NEW cov: 12453 ft: 14558 corp: 7/121b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeByte- 00:14:26.341 [2024-11-05 16:38:30.925043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.341 [2024-11-05 16:38:30.925080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:26.341 [2024-11-05 16:38:30.925176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:004f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.341 [2024-11-05 16:38:30.925199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:26.341 [2024-11-05 16:38:30.925290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.341 [2024-11-05 16:38:30.925312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:26.341 [2024-11-05 16:38:30.925409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:4f4f314f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.341 [2024-11-05 16:38:30.925430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:26.600 #19 NEW cov: 12453 ft: 14675 corp: 8/155b lim: 35 exec/s: 0 rss: 73Mb L: 34/34 MS: 1 InsertByte- 00:14:26.600 [2024-11-05 16:38:30.995237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.600 [2024-11-05 16:38:30.995274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:26.600 [2024-11-05 16:38:30.995377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:004f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.600 [2024-11-05 16:38:30.995398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:26.600 [2024-11-05 16:38:30.995495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4f4f4f99 cdw11:4f4f0002 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:14:26.600 [2024-11-05 16:38:30.995516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:26.600 [2024-11-05 16:38:30.995615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.600 [2024-11-05 16:38:30.995636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:26.600 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:14:26.600 #20 NEW cov: 12476 ft: 14756 corp: 9/188b lim: 35 exec/s: 0 rss: 73Mb L: 33/34 MS: 1 ShuffleBytes- 00:14:26.600 [2024-11-05 16:38:31.095344] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.600 [2024-11-05 16:38:31.095380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:26.600 [2024-11-05 16:38:31.095476] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:4f4f4f31 cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.600 [2024-11-05 16:38:31.095498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:26.600 [2024-11-05 16:38:31.095591] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4f4f4f4f cdw11:4f0a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.600 [2024-11-05 16:38:31.095612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:26.600 #21 NEW cov: 12476 ft: 14997 corp: 10/209b lim: 35 exec/s: 21 rss: 73Mb L: 21/34 MS: 1 EraseBytes- 00:14:26.600 [2024-11-05 16:38:31.186128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.600 [2024-11-05 16:38:31.186166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:26.600 [2024-11-05 16:38:31.186270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:004f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.600 [2024-11-05 16:38:31.186295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:26.600 [2024-11-05 16:38:31.186404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4f4f4f99 cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.600 [2024-11-05 16:38:31.186425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:26.860 [2024-11-05 16:38:31.186524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.860 [2024-11-05 16:38:31.186546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:26.860 #22 NEW cov: 12476 ft: 15029 corp: 
11/242b lim: 35 exec/s: 22 rss: 73Mb L: 33/34 MS: 1 ChangeBinInt- 00:14:26.860 [2024-11-05 16:38:31.246521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.860 [2024-11-05 16:38:31.246559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:26.860 [2024-11-05 16:38:31.246658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:004f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.860 [2024-11-05 16:38:31.246680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:26.860 [2024-11-05 16:38:31.246782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.860 [2024-11-05 16:38:31.246803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:26.860 [2024-11-05 16:38:31.246898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:4f4f314f cdw11:4f4f0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.860 [2024-11-05 16:38:31.246921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:26.860 #23 NEW cov: 12476 ft: 15039 corp: 12/276b lim: 35 exec/s: 23 rss: 73Mb L: 34/34 MS: 1 ChangeByte- 00:14:26.860 [2024-11-05 16:38:31.316886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:003a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.860 [2024-11-05 16:38:31.316922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:26.860 [2024-11-05 16:38:31.317021] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:004f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.860 [2024-11-05 16:38:31.317043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:26.860 [2024-11-05 16:38:31.317146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:4f4f4f99 cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.860 [2024-11-05 16:38:31.317167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:26.860 [2024-11-05 16:38:31.317256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.860 [2024-11-05 16:38:31.317277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:26.860 #24 NEW cov: 12476 ft: 15054 corp: 13/309b lim: 35 exec/s: 24 rss: 73Mb L: 33/34 MS: 1 ChangeByte- 00:14:26.860 [2024-11-05 16:38:31.406519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.860 [2024-11-05 16:38:31.406560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:26.860 [2024-11-05 16:38:31.406657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:000a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:26.860 [2024-11-05 16:38:31.406679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:26.860 #25 NEW cov: 12476 ft: 15074 corp: 14/323b lim: 35 exec/s: 25 rss: 73Mb L: 14/34 MS: 1 ChangeBinInt- 00:14:27.119 [2024-11-05 16:38:31.466684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.119 [2024-11-05 16:38:31.466725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:27.119 [2024-11-05 16:38:31.466820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ff00ffff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.119 [2024-11-05 16:38:31.466843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:27.119 #26 NEW cov: 12476 ft: 15117 corp: 15/337b lim: 35 exec/s: 26 rss: 73Mb L: 14/34 MS: 1 CMP- DE: "\376\377\377\377\000\000\000\000"- 00:14:27.119 [2024-11-05 16:38:31.557995] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.119 [2024-11-05 16:38:31.558031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:27.119 [2024-11-05 16:38:31.558121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:0e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.119 [2024-11-05 16:38:31.558142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:27.119 [2024-11-05 16:38:31.558238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:9700000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.119 [2024-11-05 16:38:31.558259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:27.119 [2024-11-05 16:38:31.558357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:000a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.119 [2024-11-05 16:38:31.558378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:27.119 #27 NEW cov: 12476 ft: 15142 corp: 16/365b lim: 35 exec/s: 27 rss: 73Mb L: 28/34 MS: 1 CrossOver- 00:14:27.119 [2024-11-05 16:38:31.657920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.119 [2024-11-05 16:38:31.657958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:27.119 [2024-11-05 16:38:31.658060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00ff0000 
cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.119 [2024-11-05 16:38:31.658081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:27.119 [2024-11-05 16:38:31.658182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.119 [2024-11-05 16:38:31.658203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:27.402 #28 NEW cov: 12476 ft: 15158 corp: 17/389b lim: 35 exec/s: 28 rss: 74Mb L: 24/34 MS: 1 InsertRepeatedBytes- 00:14:27.402 [2024-11-05 16:38:31.749406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.402 [2024-11-05 16:38:31.749447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:27.402 [2024-11-05 16:38:31.749549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.402 [2024-11-05 16:38:31.749570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:27.402 [2024-11-05 16:38:31.749676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:0e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.402 [2024-11-05 16:38:31.749697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:27.402 [2024-11-05 16:38:31.749800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:9700000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.402 [2024-11-05 16:38:31.749822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:27.402 [2024-11-05 16:38:31.749926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:000a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.402 [2024-11-05 16:38:31.749947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:14:27.402 #29 NEW cov: 12476 ft: 15222 corp: 18/424b lim: 35 exec/s: 29 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:14:27.402 [2024-11-05 16:38:31.849674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000208 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.402 [2024-11-05 16:38:31.849718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:27.402 [2024-11-05 16:38:31.849814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.402 [2024-11-05 16:38:31.849836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:27.402 [2024-11-05 16:38:31.849933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 
nsid:0 cdw10:0e000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.402 [2024-11-05 16:38:31.849954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:27.402 [2024-11-05 16:38:31.850056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:9700000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.402 [2024-11-05 16:38:31.850077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:27.402 [2024-11-05 16:38:31.850173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:000a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.402 [2024-11-05 16:38:31.850195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:14:27.402 #30 NEW cov: 12476 ft: 15246 corp: 19/459b lim: 35 exec/s: 30 rss: 74Mb L: 35/35 MS: 1 ChangeBit- 00:14:27.402 [2024-11-05 16:38:31.949735] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.402 [2024-11-05 16:38:31.949772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:27.402 [2024-11-05 16:38:31.949874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.402 [2024-11-05 16:38:31.949899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:27.402 [2024-11-05 16:38:31.949997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:994f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.402 [2024-11-05 16:38:31.950020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:27.402 [2024-11-05 16:38:31.950124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:4f4f4f4f cdw11:4f4f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.402 [2024-11-05 16:38:31.950146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:27.662 #31 NEW cov: 12476 ft: 15254 corp: 20/493b lim: 35 exec/s: 31 rss: 74Mb L: 34/35 MS: 1 CrossOver- 00:14:27.662 [2024-11-05 16:38:32.049513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.662 [2024-11-05 16:38:32.049549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:27.662 [2024-11-05 16:38:32.049649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00ff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.662 [2024-11-05 16:38:32.049670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:27.662 #32 NEW cov: 12476 ft: 15307 corp: 21/508b lim: 35 exec/s: 32 rss: 74Mb L: 15/35 MS: 1 EraseBytes- 00:14:27.662 
[2024-11-05 16:38:32.140337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.662 [2024-11-05 16:38:32.140373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:27.662 [2024-11-05 16:38:32.140467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00ff0000 cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.662 [2024-11-05 16:38:32.140490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:27.662 [2024-11-05 16:38:32.140588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:fffff1ff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:27.662 [2024-11-05 16:38:32.140610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:27.662 #33 NEW cov: 12476 ft: 15337 corp: 22/533b lim: 35 exec/s: 16 rss: 74Mb L: 25/35 MS: 1 InsertByte- 00:14:27.662 #33 DONE cov: 12476 ft: 15337 corp: 22/533b lim: 35 exec/s: 16 rss: 74Mb 00:14:27.662 ###### Recommended dictionary. ###### 00:14:27.662 "\376\377\377\377\000\000\000\000" # Uses: 0 00:14:27.662 ###### End of recommended dictionary. ###### 00:14:27.662 Done 33 runs in 2 second(s) 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:14:27.922 16:38:32 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:14:27.922 [2024-11-05 16:38:32.315667] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:14:27.922 [2024-11-05 16:38:32.315726] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3522359 ] 00:14:28.180 [2024-11-05 16:38:32.559697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.181 [2024-11-05 16:38:32.610038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.181 [2024-11-05 16:38:32.674077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.181 [2024-11-05 16:38:32.690304] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:14:28.181 INFO: Running with entropic power schedule (0xFF, 100). 00:14:28.181 INFO: Seed: 3100386198 00:14:28.181 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:14:28.181 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:14:28.181 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:14:28.181 INFO: A corpus is not provided, starting from an empty corpus 00:14:28.181 #2 INITED exec/s: 0 rss: 66Mb 00:14:28.181 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
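The ../common.sh@72 and @73 frames in the trace above show only the counter arithmetic, (( i++ )) and (( i < fuzz_num )), plus the start_llvm_fuzz 5 1 0x1 call. A C-style loop of roughly the following shape would produce exactly that trace; the loop form and fuzz_num are assumptions, since nothing else of the driver is visible here.

  # Sketch of the driver loop implied by the ../common.sh trace frames.
  for ((i = 0; i < fuzz_num; i++)); do
      # args: fuzzer type, run time (forwarded as -t), core mask (forwarded as -m)
      start_llvm_fuzz "$i" 1 0x1
  done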
00:14:28.181 This may also happen if the target rejected all inputs we tried so far 00:14:28.181 [2024-11-05 16:38:32.736392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.181 [2024-11-05 16:38:32.736424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:28.181 [2024-11-05 16:38:32.736482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.181 [2024-11-05 16:38:32.736498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:28.181 [2024-11-05 16:38:32.736552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.181 [2024-11-05 16:38:32.736567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:28.698 NEW_FUNC[1/716]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:14:28.698 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:14:28.698 #3 NEW cov: 12260 ft: 12257 corp: 2/33b lim: 45 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:14:28.698 [2024-11-05 16:38:33.197693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.698 [2024-11-05 16:38:33.197738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:28.698 [2024-11-05 16:38:33.197798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3939397a cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.698 [2024-11-05 16:38:33.197814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:28.698 [2024-11-05 16:38:33.197873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.698 [2024-11-05 16:38:33.197888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:28.698 #6 NEW cov: 12373 ft: 12800 corp: 3/68b lim: 45 exec/s: 0 rss: 73Mb L: 35/35 MS: 3 InsertByte-InsertByte-CrossOver- 00:14:28.698 [2024-11-05 16:38:33.237905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cececece cdw11:cece0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.698 [2024-11-05 16:38:33.237933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:28.698 [2024-11-05 16:38:33.238011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cececece cdw11:cece0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.698 [2024-11-05 16:38:33.238026] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:28.698 [2024-11-05 16:38:33.238096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:cececece cdw11:cece0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.698 [2024-11-05 16:38:33.238110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:28.698 [2024-11-05 16:38:33.238166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:cececece cdw11:cece0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.698 [2024-11-05 16:38:33.238179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:28.698 #7 NEW cov: 12379 ft: 13451 corp: 4/108b lim: 45 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:14:28.698 [2024-11-05 16:38:33.278033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:09090a09 cdw11:09090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.698 [2024-11-05 16:38:33.278060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:28.698 [2024-11-05 16:38:33.278118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:09090909 cdw11:09090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.698 [2024-11-05 16:38:33.278132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:28.698 [2024-11-05 16:38:33.278190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:09090909 cdw11:09090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.698 [2024-11-05 16:38:33.278204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:28.698 [2024-11-05 16:38:33.278257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:09090909 cdw11:09090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.698 [2024-11-05 16:38:33.278271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:28.957 #8 NEW cov: 12464 ft: 13688 corp: 5/150b lim: 45 exec/s: 0 rss: 73Mb L: 42/42 MS: 1 InsertRepeatedBytes- 00:14:28.957 [2024-11-05 16:38:33.317903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.957 [2024-11-05 16:38:33.317929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:28.957 [2024-11-05 16:38:33.317990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.957 [2024-11-05 16:38:33.318005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:28.957 [2024-11-05 16:38:33.318062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.957 [2024-11-05 16:38:33.318075] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:28.957 #9 NEW cov: 12464 ft: 13849 corp: 6/185b lim: 45 exec/s: 0 rss: 73Mb L: 35/42 MS: 1 CopyPart- 00:14:28.957 [2024-11-05 16:38:33.377890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.957 [2024-11-05 16:38:33.377916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:28.957 [2024-11-05 16:38:33.377978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.957 [2024-11-05 16:38:33.377992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:28.957 #10 NEW cov: 12464 ft: 14197 corp: 7/210b lim: 45 exec/s: 0 rss: 73Mb L: 25/42 MS: 1 EraseBytes- 00:14:28.957 [2024-11-05 16:38:33.417814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:81814381 cdw11:81810004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.957 [2024-11-05 16:38:33.417840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:28.958 #13 NEW cov: 12464 ft: 14968 corp: 8/224b lim: 45 exec/s: 0 rss: 73Mb L: 14/42 MS: 3 ChangeByte-ChangeByte-InsertRepeatedBytes- 00:14:28.958 [2024-11-05 16:38:33.458316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.958 [2024-11-05 16:38:33.458342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:28.958 [2024-11-05 16:38:33.458419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.958 [2024-11-05 16:38:33.458434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:28.958 [2024-11-05 16:38:33.458493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.958 [2024-11-05 16:38:33.458507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:28.958 #14 NEW cov: 12464 ft: 15010 corp: 9/256b lim: 45 exec/s: 0 rss: 73Mb L: 32/42 MS: 1 CrossOver- 00:14:28.958 [2024-11-05 16:38:33.518729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.958 [2024-11-05 16:38:33.518754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:28.958 [2024-11-05 16:38:33.518813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.958 [2024-11-05 16:38:33.518836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:14:28.958 [2024-11-05 16:38:33.518896] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393941 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.958 [2024-11-05 16:38:33.518909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:28.958 [2024-11-05 16:38:33.518966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:39393939 cdw11:390a0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:28.958 [2024-11-05 16:38:33.518980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:29.217 #15 NEW cov: 12464 ft: 15076 corp: 10/292b lim: 45 exec/s: 0 rss: 73Mb L: 36/42 MS: 1 InsertByte- 00:14:29.217 [2024-11-05 16:38:33.578719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.217 [2024-11-05 16:38:33.578743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.217 [2024-11-05 16:38:33.578802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3939397a cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.217 [2024-11-05 16:38:33.578817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.217 [2024-11-05 16:38:33.578873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.217 [2024-11-05 16:38:33.578887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:29.217 #16 NEW cov: 12464 ft: 15140 corp: 11/327b lim: 45 exec/s: 0 rss: 73Mb L: 35/42 MS: 1 CopyPart- 00:14:29.217 [2024-11-05 16:38:33.618961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.217 [2024-11-05 16:38:33.618986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.217 [2024-11-05 16:38:33.619063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3939397a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.217 [2024-11-05 16:38:33.619078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.217 [2024-11-05 16:38:33.619138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00390000 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.217 [2024-11-05 16:38:33.619152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:29.217 [2024-11-05 16:38:33.619211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.217 [2024-11-05 16:38:33.619225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
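Each "#N NEW" record in this stream is a libFuzzer status line: #N is the number of executions so far, cov: the covered code edges, ft: the finer-grained coverage features, corp: the corpus size in units and bytes (e.g. 36/1101b = 36 inputs totalling 1101 bytes), lim: the current input-length cap, L: the new input's length versus the largest in the corpus, and MS: the stack of mutations that produced it (CrossOver, EraseBytes, PersAutoDict with its DE: dictionary entry, and so on). To pull the coverage trajectory out of a saved copy of this log — a sketch, with a hypothetical file name:

# Extract "<execution#> <cov>" pairs from the libFuzzer status lines above.
grep -Eo '#[0-9]+ NEW cov: [0-9]+' run5.log | awk '{sub(/^#/, "", $1); print $1, $4}'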
00:14:29.217 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:14:29.217 #17 NEW cov: 12487 ft: 15175 corp: 12/370b lim: 45 exec/s: 0 rss: 73Mb L: 43/43 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:14:29.217 [2024-11-05 16:38:33.658676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.217 [2024-11-05 16:38:33.658702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.217 [2024-11-05 16:38:33.658766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.217 [2024-11-05 16:38:33.658780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.217 #18 NEW cov: 12487 ft: 15205 corp: 13/393b lim: 45 exec/s: 0 rss: 73Mb L: 23/43 MS: 1 CrossOver- 00:14:29.217 [2024-11-05 16:38:33.698814] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39394381 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.217 [2024-11-05 16:38:33.698839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.217 [2024-11-05 16:38:33.698900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:39393939 cdw11:81810004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.217 [2024-11-05 16:38:33.698915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.217 #19 NEW cov: 12487 ft: 15238 corp: 14/418b lim: 45 exec/s: 19 rss: 73Mb L: 25/43 MS: 1 CrossOver- 00:14:29.217 [2024-11-05 16:38:33.759211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.217 [2024-11-05 16:38:33.759237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.217 [2024-11-05 16:38:33.759297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3939387a cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.217 [2024-11-05 16:38:33.759311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.217 [2024-11-05 16:38:33.759387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.217 [2024-11-05 16:38:33.759401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:29.217 #20 NEW cov: 12487 ft: 15284 corp: 15/453b lim: 45 exec/s: 20 rss: 73Mb L: 35/43 MS: 1 ChangeASCIIInt- 00:14:29.476 [2024-11-05 16:38:33.819553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.476 [2024-11-05 16:38:33.819581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f 
p:0 m:0 dnr:0 00:14:29.476 [2024-11-05 16:38:33.819639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3939397a cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.476 [2024-11-05 16:38:33.819654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.476 [2024-11-05 16:38:33.819716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.476 [2024-11-05 16:38:33.819731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:29.476 [2024-11-05 16:38:33.819789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:39793939 cdw11:79790003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.476 [2024-11-05 16:38:33.819803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:29.476 #21 NEW cov: 12487 ft: 15295 corp: 16/494b lim: 45 exec/s: 21 rss: 74Mb L: 41/43 MS: 1 InsertRepeatedBytes- 00:14:29.476 [2024-11-05 16:38:33.859453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.476 [2024-11-05 16:38:33.859482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.476 [2024-11-05 16:38:33.859542] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3939397a cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.476 [2024-11-05 16:38:33.859556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.477 [2024-11-05 16:38:33.859612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.477 [2024-11-05 16:38:33.859626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:29.477 #22 NEW cov: 12487 ft: 15341 corp: 17/529b lim: 45 exec/s: 22 rss: 74Mb L: 35/43 MS: 1 ShuffleBytes- 00:14:29.477 [2024-11-05 16:38:33.899753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.477 [2024-11-05 16:38:33.899779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.477 [2024-11-05 16:38:33.899838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.477 [2024-11-05 16:38:33.899853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.477 [2024-11-05 16:38:33.899910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffff39ff cdw11:ff390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.477 [2024-11-05 16:38:33.899923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:14:29.477 [2024-11-05 16:38:33.899978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.477 [2024-11-05 16:38:33.899992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:29.477 #23 NEW cov: 12487 ft: 15352 corp: 18/565b lim: 45 exec/s: 23 rss: 74Mb L: 36/43 MS: 1 InsertRepeatedBytes- 00:14:29.477 [2024-11-05 16:38:33.959746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.477 [2024-11-05 16:38:33.959772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.477 [2024-11-05 16:38:33.959830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3939397a cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.477 [2024-11-05 16:38:33.959845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.477 [2024-11-05 16:38:33.959902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.477 [2024-11-05 16:38:33.959915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:29.477 #24 NEW cov: 12487 ft: 15372 corp: 19/600b lim: 45 exec/s: 24 rss: 74Mb L: 35/43 MS: 1 CrossOver- 00:14:29.477 [2024-11-05 16:38:34.019786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.477 [2024-11-05 16:38:34.019812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.477 [2024-11-05 16:38:34.019869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.477 [2024-11-05 16:38:34.019886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.736 #25 NEW cov: 12487 ft: 15398 corp: 20/625b lim: 45 exec/s: 25 rss: 74Mb L: 25/43 MS: 1 ShuffleBytes- 00:14:29.736 [2024-11-05 16:38:34.080088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.736 [2024-11-05 16:38:34.080113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.736 [2024-11-05 16:38:34.080173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.736 [2024-11-05 16:38:34.080187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.736 [2024-11-05 16:38:34.080246] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.736 
[2024-11-05 16:38:34.080259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:29.736 #26 NEW cov: 12487 ft: 15431 corp: 21/660b lim: 45 exec/s: 26 rss: 74Mb L: 35/43 MS: 1 CopyPart- 00:14:29.736 [2024-11-05 16:38:34.140460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.736 [2024-11-05 16:38:34.140485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.736 [2024-11-05 16:38:34.140544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3939397a cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.736 [2024-11-05 16:38:34.140559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.736 [2024-11-05 16:38:34.140616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.736 [2024-11-05 16:38:34.140629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:29.736 [2024-11-05 16:38:34.140690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00003939 cdw11:00290003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.736 [2024-11-05 16:38:34.140703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:29.737 #27 NEW cov: 12487 ft: 15465 corp: 22/701b lim: 45 exec/s: 27 rss: 74Mb L: 41/43 MS: 1 ChangeBinInt- 00:14:29.737 [2024-11-05 16:38:34.200457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.737 [2024-11-05 16:38:34.200482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.737 [2024-11-05 16:38:34.200539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3939397a cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.737 [2024-11-05 16:38:34.200553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.737 [2024-11-05 16:38:34.200613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393939 cdw11:39390004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.737 [2024-11-05 16:38:34.200626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:29.737 #28 NEW cov: 12487 ft: 15498 corp: 23/736b lim: 45 exec/s: 28 rss: 74Mb L: 35/43 MS: 1 ChangeByte- 00:14:29.737 [2024-11-05 16:38:34.260837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.737 [2024-11-05 16:38:34.260867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.737 [2024-11-05 16:38:34.260928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3939397a cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.737 [2024-11-05 16:38:34.260942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.737 [2024-11-05 16:38:34.261001] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.737 [2024-11-05 16:38:34.261015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:29.737 [2024-11-05 16:38:34.261071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00003900 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.737 [2024-11-05 16:38:34.261086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:29.737 #29 NEW cov: 12487 ft: 15517 corp: 24/779b lim: 45 exec/s: 29 rss: 74Mb L: 43/43 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:14:29.737 [2024-11-05 16:38:34.300985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.737 [2024-11-05 16:38:34.301012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.737 [2024-11-05 16:38:34.301086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.737 [2024-11-05 16:38:34.301101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.737 [2024-11-05 16:38:34.301158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff3939ff cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.737 [2024-11-05 16:38:34.301172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:29.737 [2024-11-05 16:38:34.301228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:3939ff39 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.737 [2024-11-05 16:38:34.301242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:29.998 #30 NEW cov: 12487 ft: 15532 corp: 25/820b lim: 45 exec/s: 30 rss: 74Mb L: 41/43 MS: 1 CopyPart- 00:14:29.998 [2024-11-05 16:38:34.360945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.998 [2024-11-05 16:38:34.360971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.998 [2024-11-05 16:38:34.361030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3939387a cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.998 [2024-11-05 16:38:34.361044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.998 [2024-11-05 16:38:34.361100] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.998 [2024-11-05 16:38:34.361114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:29.998 #31 NEW cov: 12487 ft: 15546 corp: 26/855b lim: 45 exec/s: 31 rss: 74Mb L: 35/43 MS: 1 ChangeBit- 00:14:29.998 [2024-11-05 16:38:34.421341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.998 [2024-11-05 16:38:34.421366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.998 [2024-11-05 16:38:34.421426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3939397a cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.998 [2024-11-05 16:38:34.421441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.998 [2024-11-05 16:38:34.421497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.998 [2024-11-05 16:38:34.421510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:29.998 [2024-11-05 16:38:34.421567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00003900 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.998 [2024-11-05 16:38:34.421581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:29.998 #32 NEW cov: 12487 ft: 15579 corp: 27/898b lim: 45 exec/s: 32 rss: 74Mb L: 43/43 MS: 1 ShuffleBytes- 00:14:29.998 [2024-11-05 16:38:34.481343] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.998 [2024-11-05 16:38:34.481370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.998 [2024-11-05 16:38:34.481430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.998 [2024-11-05 16:38:34.481444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.998 [2024-11-05 16:38:34.481503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393939 cdw11:39ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.998 [2024-11-05 16:38:34.481516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:29.998 #33 NEW cov: 12487 ft: 15605 corp: 28/930b lim: 45 exec/s: 33 rss: 74Mb L: 32/43 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:14:29.998 [2024-11-05 16:38:34.521572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:09090a09 cdw11:09090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.998 [2024-11-05 16:38:34.521598] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.998 [2024-11-05 16:38:34.521658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:09090909 cdw11:09090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.998 [2024-11-05 16:38:34.521672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.998 [2024-11-05 16:38:34.521725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:09090909 cdw11:09090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.998 [2024-11-05 16:38:34.521739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:29.998 [2024-11-05 16:38:34.521794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:09090909 cdw11:09090000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.998 [2024-11-05 16:38:34.521808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:29.998 #34 NEW cov: 12487 ft: 15616 corp: 29/967b lim: 45 exec/s: 34 rss: 74Mb L: 37/43 MS: 1 EraseBytes- 00:14:29.998 [2024-11-05 16:38:34.581614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.998 [2024-11-05 16:38:34.581641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:29.998 [2024-11-05 16:38:34.581701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3939397a cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.998 [2024-11-05 16:38:34.581719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:29.998 [2024-11-05 16:38:34.581777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393979 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:29.998 [2024-11-05 16:38:34.581793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:30.258 #35 NEW cov: 12487 ft: 15625 corp: 30/1002b lim: 45 exec/s: 35 rss: 74Mb L: 35/43 MS: 1 ChangeBit- 00:14:30.258 [2024-11-05 16:38:34.621928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:30.258 [2024-11-05 16:38:34.621954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:30.258 [2024-11-05 16:38:34.622015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:3939387a cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:30.258 [2024-11-05 16:38:34.622030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:30.258 [2024-11-05 16:38:34.622087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:30.258 [2024-11-05 16:38:34.622101] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:30.258 [2024-11-05 16:38:34.622158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:39393939 cdw11:39390000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:30.258 [2024-11-05 16:38:34.622172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:30.258 #36 NEW cov: 12487 ft: 15634 corp: 31/1039b lim: 45 exec/s: 36 rss: 74Mb L: 37/43 MS: 1 CrossOver- 00:14:30.258 [2024-11-05 16:38:34.661852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39003039 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:30.258 [2024-11-05 16:38:34.661877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:30.258 [2024-11-05 16:38:34.661937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:39390000 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:30.258 [2024-11-05 16:38:34.661951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:30.258 [2024-11-05 16:38:34.662008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:30.258 [2024-11-05 16:38:34.662023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:30.258 #37 NEW cov: 12487 ft: 15648 corp: 32/1072b lim: 45 exec/s: 37 rss: 74Mb L: 33/43 MS: 1 EraseBytes- 00:14:30.258 [2024-11-05 16:38:34.721984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:39393039 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:30.258 [2024-11-05 16:38:34.722009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:30.258 [2024-11-05 16:38:34.722073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:39393939 cdw11:39390001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:30.258 [2024-11-05 16:38:34.722088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:30.258 [2024-11-05 16:38:34.722146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:390a3939 cdw11:395f0002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:30.258 [2024-11-05 16:38:34.722159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:30.258 #38 NEW cov: 12487 ft: 15653 corp: 33/1101b lim: 45 exec/s: 19 rss: 75Mb L: 29/43 MS: 1 InsertRepeatedBytes- 00:14:30.258 #38 DONE cov: 12487 ft: 15653 corp: 33/1101b lim: 45 exec/s: 19 rss: 75Mb 00:14:30.258 ###### Recommended dictionary. ###### 00:14:30.258 "\000\000\000\000\000\000\000\000" # Uses: 1 00:14:30.258 "\377\377\377\377\377\377\377\377" # Uses: 0 00:14:30.258 ###### End of recommended dictionary. 
###### 00:14:30.258 Done 38 runs in 2 second(s) 00:14:30.518 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:14:30.518 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:14:30.518 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:14:30.518 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:14:30.518 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:14:30.518 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:14:30.518 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:14:30.518 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:14:30.518 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:14:30.518 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:14:30.518 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:14:30.518 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:14:30.518 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:14:30.518 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:14:30.518 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:14:30.518 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:14:30.518 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:14:30.519 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:14:30.519 16:38:34 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:14:30.519 [2024-11-05 16:38:34.931691] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:14:30.519 [2024-11-05 16:38:34.931789] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3522712 ] 00:14:30.778 [2024-11-05 16:38:35.204304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.778 [2024-11-05 16:38:35.251992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.778 [2024-11-05 16:38:35.315945] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.778 [2024-11-05 16:38:35.332177] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:14:30.778 INFO: Running with entropic power schedule (0xFF, 100). 
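The trace above is the per-fuzzer setup that nvmf/run.sh repeats for each target: fuzzer number 6 is zero-padded into TCP port 4406, the JSON config is rewritten to listen there, and two known, deliberate leaks are suppressed so LeakSanitizer does not fail the run. Condensed into a sketch (the 44xx port scheme is inferred from the printf %02d and port=4406 lines above, and the sed output redirect from the -c argument of the invocation below):

fuzzer_type=6
port="44$(printf %02d "$fuzzer_type")"    # -> 4406, matching the trsvcid in the trid
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf \
    > "/tmp/fuzz_json_${fuzzer_type}.conf"
cat > /var/tmp/suppress_nvmf_fuzz <<'EOF'  # leaks that are expected under fuzzing
leak:spdk_nvmf_qpair_disconnect
leak:nvmf_ctrlr_create
EOF
export LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0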
00:14:30.778 INFO: Seed: 1447404996 00:14:31.036 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:14:31.036 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:14:31.036 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:14:31.036 INFO: A corpus is not provided, starting from an empty corpus 00:14:31.036 #2 INITED exec/s: 0 rss: 66Mb 00:14:31.036 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:14:31.036 This may also happen if the target rejected all inputs we tried so far 00:14:31.036 [2024-11-05 16:38:35.377784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:14:31.036 [2024-11-05 16:38:35.377815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:31.296 NEW_FUNC[1/714]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:14:31.296 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:14:31.296 #3 NEW cov: 12177 ft: 12173 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CopyPart- 00:14:31.296 [2024-11-05 16:38:35.698772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a05 cdw11:00000000 00:14:31.296 [2024-11-05 16:38:35.698811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:31.296 #4 NEW cov: 12290 ft: 12730 corp: 3/6b lim: 10 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 InsertByte- 00:14:31.296 [2024-11-05 16:38:35.759125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:14:31.296 [2024-11-05 16:38:35.759152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:31.296 [2024-11-05 16:38:35.759211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:31.296 [2024-11-05 16:38:35.759225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:31.296 [2024-11-05 16:38:35.759280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000250a cdw11:00000000 00:14:31.296 [2024-11-05 16:38:35.759295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:31.296 #5 NEW cov: 12296 ft: 13226 corp: 4/12b lim: 10 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 CMP- DE: "\377\377\377%"- 00:14:31.296 [2024-11-05 16:38:35.798915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a0a cdw11:00000000 00:14:31.296 [2024-11-05 16:38:35.798941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:31.296 #6 NEW cov: 12381 ft: 13549 corp: 5/14b lim: 10 exec/s: 0 rss: 73Mb L: 2/6 MS: 1 ChangeBit- 00:14:31.296 [2024-11-05 16:38:35.839047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 
cdw10:0000609e cdw11:00000000 00:14:31.296 [2024-11-05 16:38:35.839073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:31.296 #10 NEW cov: 12381 ft: 13626 corp: 6/16b lim: 10 exec/s: 0 rss: 73Mb L: 2/6 MS: 4 ShuffleBytes-ChangeByte-ChangeByte-InsertByte- 00:14:31.296 [2024-11-05 16:38:35.879168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 00:14:31.296 [2024-11-05 16:38:35.879193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:31.555 #11 NEW cov: 12381 ft: 13707 corp: 7/18b lim: 10 exec/s: 0 rss: 73Mb L: 2/6 MS: 1 ChangeBinInt- 00:14:31.555 [2024-11-05 16:38:35.919547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008aff cdw11:00000000 00:14:31.555 [2024-11-05 16:38:35.919572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:31.555 [2024-11-05 16:38:35.919626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:31.555 [2024-11-05 16:38:35.919640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:31.555 [2024-11-05 16:38:35.919695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000250a cdw11:00000000 00:14:31.555 [2024-11-05 16:38:35.919709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:31.555 #12 NEW cov: 12381 ft: 13897 corp: 8/24b lim: 10 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 PersAutoDict- DE: "\377\377\377%"- 00:14:31.555 [2024-11-05 16:38:35.979704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008aff cdw11:00000000 00:14:31.555 [2024-11-05 16:38:35.979734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:31.555 [2024-11-05 16:38:35.979792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:31.555 [2024-11-05 16:38:35.979807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:31.555 [2024-11-05 16:38:35.979881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000250a cdw11:00000000 00:14:31.555 [2024-11-05 16:38:35.979896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:31.555 #13 NEW cov: 12381 ft: 13931 corp: 9/30b lim: 10 exec/s: 0 rss: 73Mb L: 6/6 MS: 1 CrossOver- 00:14:31.555 [2024-11-05 16:38:36.039597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000609e cdw11:00000000 00:14:31.555 [2024-11-05 16:38:36.039623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:31.555 #14 NEW cov: 12381 ft: 13976 corp: 10/33b lim: 10 exec/s: 0 rss: 73Mb L: 3/6 MS: 1 InsertByte- 00:14:31.555 [2024-11-05 16:38:36.100030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008aff cdw11:00000000 00:14:31.555 [2024-11-05 16:38:36.100055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:31.556 [2024-11-05 16:38:36.100111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff6f cdw11:00000000 00:14:31.556 [2024-11-05 16:38:36.100125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:31.556 [2024-11-05 16:38:36.100180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff25 cdw11:00000000 00:14:31.556 [2024-11-05 16:38:36.100194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:31.815 #15 NEW cov: 12381 ft: 14061 corp: 11/40b lim: 10 exec/s: 0 rss: 73Mb L: 7/7 MS: 1 InsertByte- 00:14:31.815 [2024-11-05 16:38:36.160087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00006060 cdw11:00000000 00:14:31.815 [2024-11-05 16:38:36.160112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:31.815 [2024-11-05 16:38:36.160170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009e9e cdw11:00000000 00:14:31.815 [2024-11-05 16:38:36.160184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:31.815 #16 NEW cov: 12381 ft: 14212 corp: 12/45b lim: 10 exec/s: 0 rss: 73Mb L: 5/7 MS: 1 CopyPart- 00:14:31.815 [2024-11-05 16:38:36.220368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000609e cdw11:00000000 00:14:31.815 [2024-11-05 16:38:36.220393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:31.815 [2024-11-05 16:38:36.220453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:31.815 [2024-11-05 16:38:36.220466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:31.815 [2024-11-05 16:38:36.220521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff25 cdw11:00000000 00:14:31.815 [2024-11-05 16:38:36.220534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:31.815 #17 NEW cov: 12381 ft: 14232 corp: 13/51b lim: 10 exec/s: 0 rss: 73Mb L: 6/7 MS: 1 PersAutoDict- DE: "\377\377\377%"- 00:14:31.815 [2024-11-05 16:38:36.260623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000609e cdw11:00000000 00:14:31.815 [2024-11-05 16:38:36.260648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:31.815 [2024-11-05 16:38:36.260705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:31.815 [2024-11-05 16:38:36.260726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:14:31.815 [2024-11-05 16:38:36.260779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:31.815 [2024-11-05 16:38:36.260809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:31.816 [2024-11-05 16:38:36.260863] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:31.816 [2024-11-05 16:38:36.260876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:31.816 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:14:31.816 #18 NEW cov: 12404 ft: 14491 corp: 14/59b lim: 10 exec/s: 0 rss: 73Mb L: 8/8 MS: 1 InsertRepeatedBytes- 00:14:31.816 [2024-11-05 16:38:36.300590] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:14:31.816 [2024-11-05 16:38:36.300614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:31.816 [2024-11-05 16:38:36.300672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:31.816 [2024-11-05 16:38:36.300686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:31.816 [2024-11-05 16:38:36.300742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00002560 cdw11:00000000 00:14:31.816 [2024-11-05 16:38:36.300756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:31.816 #19 NEW cov: 12404 ft: 14511 corp: 15/65b lim: 10 exec/s: 0 rss: 74Mb L: 6/8 MS: 1 CrossOver- 00:14:31.816 [2024-11-05 16:38:36.360495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008a0a cdw11:00000000 00:14:31.816 [2024-11-05 16:38:36.360520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:31.816 #20 NEW cov: 12404 ft: 14522 corp: 16/67b lim: 10 exec/s: 20 rss: 74Mb L: 2/8 MS: 1 CopyPart- 00:14:31.816 [2024-11-05 16:38:36.400938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:31.816 [2024-11-05 16:38:36.400968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:31.816 [2024-11-05 16:38:36.401024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff25 cdw11:00000000 00:14:31.816 [2024-11-05 16:38:36.401039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:31.816 [2024-11-05 16:38:36.401095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000609e cdw11:00000000 00:14:32.076 [2024-11-05 16:38:36.401108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:32.076 #21 NEW cov: 12404 ft: 14528 corp: 17/73b lim: 10 exec/s: 21 
rss: 74Mb L: 6/8 MS: 1 PersAutoDict- DE: "\377\377\377%"- 00:14:32.076 [2024-11-05 16:38:36.441384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff60 cdw11:00000000 00:14:32.076 [2024-11-05 16:38:36.441409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.076 [2024-11-05 16:38:36.441467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:32.076 [2024-11-05 16:38:36.441481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:32.076 [2024-11-05 16:38:36.441537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009eff cdw11:00000000 00:14:32.076 [2024-11-05 16:38:36.441550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:32.076 [2024-11-05 16:38:36.441604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:32.076 [2024-11-05 16:38:36.441618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:32.076 #22 NEW cov: 12404 ft: 14591 corp: 18/81b lim: 10 exec/s: 22 rss: 74Mb L: 8/8 MS: 1 ShuffleBytes- 00:14:32.076 [2024-11-05 16:38:36.500920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008afb cdw11:00000000 00:14:32.076 [2024-11-05 16:38:36.500946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.076 #23 NEW cov: 12404 ft: 14608 corp: 19/83b lim: 10 exec/s: 23 rss: 74Mb L: 2/8 MS: 1 ChangeBinInt- 00:14:32.076 [2024-11-05 16:38:36.541046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000250a cdw11:00000000 00:14:32.076 [2024-11-05 16:38:36.541071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.076 #24 NEW cov: 12404 ft: 14635 corp: 20/86b lim: 10 exec/s: 24 rss: 74Mb L: 3/8 MS: 1 CrossOver- 00:14:32.076 [2024-11-05 16:38:36.581179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002f9e cdw11:00000000 00:14:32.076 [2024-11-05 16:38:36.581204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.076 #25 NEW cov: 12404 ft: 14650 corp: 21/88b lim: 10 exec/s: 25 rss: 74Mb L: 2/8 MS: 1 ChangeByte- 00:14:32.076 [2024-11-05 16:38:36.621438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:32.076 [2024-11-05 16:38:36.621464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.076 [2024-11-05 16:38:36.621523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff25 cdw11:00000000 00:14:32.076 [2024-11-05 16:38:36.621538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:32.076 #26 NEW cov: 12404 ft: 14659 corp: 22/93b lim: 
10 exec/s: 26 rss: 74Mb L: 5/8 MS: 1 PersAutoDict- DE: "\377\377\377%"- 00:14:32.076 [2024-11-05 16:38:36.661554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000608a cdw11:00000000 00:14:32.076 [2024-11-05 16:38:36.661579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.076 [2024-11-05 16:38:36.661637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fb9e cdw11:00000000 00:14:32.076 [2024-11-05 16:38:36.661651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:32.336 #27 NEW cov: 12404 ft: 14669 corp: 23/98b lim: 10 exec/s: 27 rss: 74Mb L: 5/8 MS: 1 CrossOver- 00:14:32.336 [2024-11-05 16:38:36.701788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:32.336 [2024-11-05 16:38:36.701813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.336 [2024-11-05 16:38:36.701870] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff25 cdw11:00000000 00:14:32.336 [2024-11-05 16:38:36.701885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:32.337 [2024-11-05 16:38:36.701938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000250a cdw11:00000000 00:14:32.337 [2024-11-05 16:38:36.701952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:32.337 #28 NEW cov: 12404 ft: 14674 corp: 24/104b lim: 10 exec/s: 28 rss: 74Mb L: 6/8 MS: 1 PersAutoDict- DE: "\377\377\377%"- 00:14:32.337 [2024-11-05 16:38:36.741600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a05 cdw11:00000000 00:14:32.337 [2024-11-05 16:38:36.741625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.337 #29 NEW cov: 12404 ft: 14733 corp: 25/107b lim: 10 exec/s: 29 rss: 74Mb L: 3/8 MS: 1 ShuffleBytes- 00:14:32.337 [2024-11-05 16:38:36.802063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:32.337 [2024-11-05 16:38:36.802089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.337 [2024-11-05 16:38:36.802144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff25 cdw11:00000000 00:14:32.337 [2024-11-05 16:38:36.802158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:32.337 [2024-11-05 16:38:36.802228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00008afb cdw11:00000000 00:14:32.337 [2024-11-05 16:38:36.802242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:32.337 #30 NEW cov: 12404 ft: 14765 corp: 26/113b lim: 10 exec/s: 30 rss: 74Mb L: 6/8 MS: 1 PersAutoDict- DE: "\377\377\377%"- 
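Each "#N NEW" status line above follows libFuzzer's standard format: "cov" counts covered edges, "ft" counts features, "corp" gives corpus units and bytes, "lim" is the current input-length cap, "exec/s" is throughput, "L: n/max" is the new input's length versus the largest so far, "MS:" names the mutation chain that produced the input, and "DE:" shows the dictionary entry a PersAutoDict/CMP mutation used. A quick, hedged way to see which mutators are paying off, run against a saved copy of this console output (the log filename is a placeholder, not something the job produces):

#!/usr/bin/env bash
# Tally the mutation operators behind every "#N NEW" coverage event.
LOG=${1:-console.log}   # saved copy of this console log (assumed path)
grep -E '#[0-9]+ NEW ' "$LOG" |
  sed -E 's/.* MS: [0-9]+ ([A-Za-z-]+)-.*/\1/' |  # keep the operator chain
  tr '-' '\n' |                                   # one operator per line
  sed '/^$/d' |
  sort | uniq -c | sort -rn                       # e.g. PersAutoDict, CrossOver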
00:14:32.337 [2024-11-05 16:38:36.862145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:14:32.337 [2024-11-05 16:38:36.862170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.337 [2024-11-05 16:38:36.862229] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:32.337 [2024-11-05 16:38:36.862243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:32.337 #31 NEW cov: 12404 ft: 14775 corp: 27/117b lim: 10 exec/s: 31 rss: 74Mb L: 4/8 MS: 1 EraseBytes- 00:14:32.337 [2024-11-05 16:38:36.922496] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:32.337 [2024-11-05 16:38:36.922522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.337 [2024-11-05 16:38:36.922580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:32.337 [2024-11-05 16:38:36.922594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:32.337 [2024-11-05 16:38:36.922649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000250a cdw11:00000000 00:14:32.337 [2024-11-05 16:38:36.922663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:32.596 #32 NEW cov: 12404 ft: 14783 corp: 28/123b lim: 10 exec/s: 32 rss: 74Mb L: 6/8 MS: 1 CopyPart- 00:14:32.596 [2024-11-05 16:38:36.962564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00008aff cdw11:00000000 00:14:32.596 [2024-11-05 16:38:36.962589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.596 [2024-11-05 16:38:36.962645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:32.596 [2024-11-05 16:38:36.962659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:32.596 [2024-11-05 16:38:36.962719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000250a cdw11:00000000 00:14:32.596 [2024-11-05 16:38:36.962733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:32.596 #33 NEW cov: 12404 ft: 14868 corp: 29/129b lim: 10 exec/s: 33 rss: 74Mb L: 6/8 MS: 1 PersAutoDict- DE: "\377\377\377%"- 00:14:32.596 [2024-11-05 16:38:37.002690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000200 cdw11:00000000 00:14:32.596 [2024-11-05 16:38:37.002723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.596 [2024-11-05 16:38:37.002781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00008aff cdw11:00000000 00:14:32.596 [2024-11-05 
16:38:37.002795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:32.596 [2024-11-05 16:38:37.002849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:32.596 [2024-11-05 16:38:37.002863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:32.596 #34 NEW cov: 12404 ft: 14892 corp: 30/135b lim: 10 exec/s: 34 rss: 74Mb L: 6/8 MS: 1 CrossOver- 00:14:32.596 [2024-11-05 16:38:37.062859] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000609e cdw11:00000000 00:14:32.596 [2024-11-05 16:38:37.062886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.596 [2024-11-05 16:38:37.062941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:32.596 [2024-11-05 16:38:37.062956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:32.596 [2024-11-05 16:38:37.063028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000fb25 cdw11:00000000 00:14:32.596 [2024-11-05 16:38:37.063044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:32.596 #35 NEW cov: 12404 ft: 14902 corp: 31/141b lim: 10 exec/s: 35 rss: 74Mb L: 6/8 MS: 1 ChangeBit- 00:14:32.596 [2024-11-05 16:38:37.122946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000608a cdw11:00000000 00:14:32.596 [2024-11-05 16:38:37.122971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.596 [2024-11-05 16:38:37.123029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fbde cdw11:00000000 00:14:32.596 [2024-11-05 16:38:37.123043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:32.596 #36 NEW cov: 12404 ft: 14916 corp: 32/146b lim: 10 exec/s: 36 rss: 74Mb L: 5/8 MS: 1 ChangeBit- 00:14:32.856 [2024-11-05 16:38:37.183243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:32.856 [2024-11-05 16:38:37.183270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.856 [2024-11-05 16:38:37.183326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff25 cdw11:00000000 00:14:32.856 [2024-11-05 16:38:37.183340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:32.856 [2024-11-05 16:38:37.183397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000607c cdw11:00000000 00:14:32.856 [2024-11-05 16:38:37.183411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:32.856 #37 NEW cov: 12404 ft: 14952 corp: 33/153b lim: 10 exec/s: 37 rss: 74Mb L: 7/8 
MS: 1 InsertByte- 00:14:32.856 [2024-11-05 16:38:37.243252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:14:32.856 [2024-11-05 16:38:37.243276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.856 [2024-11-05 16:38:37.243351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:32.856 [2024-11-05 16:38:37.243365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:32.856 #38 NEW cov: 12404 ft: 14953 corp: 34/158b lim: 10 exec/s: 38 rss: 74Mb L: 5/8 MS: 1 PersAutoDict- DE: "\377\377\377%"- 00:14:32.856 [2024-11-05 16:38:37.283237] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000240a cdw11:00000000 00:14:32.856 [2024-11-05 16:38:37.283263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.856 #39 NEW cov: 12404 ft: 14970 corp: 35/160b lim: 10 exec/s: 39 rss: 74Mb L: 2/8 MS: 1 ChangeByte- 00:14:32.856 [2024-11-05 16:38:37.323647] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000060ff cdw11:00000000 00:14:32.856 [2024-11-05 16:38:37.323673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.856 [2024-11-05 16:38:37.323722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:32.856 [2024-11-05 16:38:37.323736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:32.856 [2024-11-05 16:38:37.323791] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000259e cdw11:00000000 00:14:32.856 [2024-11-05 16:38:37.323822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:32.856 #40 NEW cov: 12404 ft: 14976 corp: 36/167b lim: 10 exec/s: 40 rss: 74Mb L: 7/8 MS: 1 PersAutoDict- DE: "\377\377\377%"- 00:14:32.856 [2024-11-05 16:38:37.363931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000acf cdw11:00000000 00:14:32.856 [2024-11-05 16:38:37.363960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:32.856 [2024-11-05 16:38:37.364017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 00:14:32.856 [2024-11-05 16:38:37.364031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:32.856 [2024-11-05 16:38:37.364088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:00000000 00:14:32.856 [2024-11-05 16:38:37.364103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:32.856 [2024-11-05 16:38:37.364160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff 
cdw11:00000000
00:14:32.856 [2024-11-05 16:38:37.364174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:14:32.856 #41 NEW cov: 12404 ft: 14983 corp: 37/175b lim: 10 exec/s: 20 rss: 74Mb L: 8/8 MS: 1 CMP- DE: "\317\001\000\000"-
00:14:32.856 #41 DONE cov: 12404 ft: 14983 corp: 37/175b lim: 10 exec/s: 20 rss: 74Mb
00:14:32.856 ###### Recommended dictionary. ######
00:14:32.856 "\377\377\377%" # Uses: 9
00:14:32.856 "\317\001\000\000" # Uses: 0
00:14:32.856 ###### End of recommended dictionary. ######
00:14:32.856 Done 41 runs in 2 second(s)
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407'
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:14:33.116 16:38:37 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7
00:14:33.116 [2024-11-05 16:38:37.568339] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization...
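The shell trace above shows how nvmf/run.sh stages each fuzzer: the TCP port is apparently 4400 plus the zero-padded fuzzer index (printf %02d 7 giving 4407), the JSON target config gets its trsvcid rewritten to that port, and two known-benign leaks are suppressed for LeakSanitizer before llvm_nvme_fuzz launches with -Z selecting the fuzzer type. A minimal sketch of that per-run staging, with paths shortened and variable names mirroring the trace rather than SPDK's actual script:

#!/usr/bin/env bash
# Sketch of the per-fuzzer staging traced above; assumes a fuzz_json.conf
# template in the current directory (an assumption for illustration).
fuzzer_type=7
port="44$(printf %02d "$fuzzer_type")"              # 4407, as at run.sh@34
corpus_dir="corpus/llvm_nvmf_${fuzzer_type}"
nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
suppress_file=/var/tmp/suppress_nvmf_fuzz

mkdir -p "$corpus_dir"
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}"

# Point the target config at the derived port, as run.sh@38 does with sed.
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" fuzz_json.conf > "$nvmf_cfg"

# LSAN suppressions: exactly the two symbols echoed at run.sh@41/@42.
printf 'leak:%s\n' spdk_nvmf_qpair_disconnect nvmf_ctrlr_create > "$suppress_file"
export LSAN_OPTIONS="report_objects=1:suppressions=${suppress_file}:print_suppressions=0"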
00:14:33.116 [2024-11-05 16:38:37.568411] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3523072 ] 00:14:33.375 [2024-11-05 16:38:37.836505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.375 [2024-11-05 16:38:37.884236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.375 [2024-11-05 16:38:37.948242] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.634 [2024-11-05 16:38:37.964464] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:14:33.634 INFO: Running with entropic power schedule (0xFF, 100). 00:14:33.634 INFO: Seed: 4079403702 00:14:33.634 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:14:33.634 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:14:33.634 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:14:33.634 INFO: A corpus is not provided, starting from an empty corpus 00:14:33.634 #2 INITED exec/s: 0 rss: 66Mb 00:14:33.634 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:14:33.634 This may also happen if the target rejected all inputs we tried so far 00:14:33.634 [2024-11-05 16:38:38.030231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e1e1 cdw11:00000000 00:14:33.634 [2024-11-05 16:38:38.030269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:34.201 NEW_FUNC[1/714]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:14:34.201 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:14:34.201 #5 NEW cov: 12159 ft: 12154 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 3 ChangeByte-CopyPart-CopyPart- 00:14:34.201 [2024-11-05 16:38:38.521695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000017e1 cdw11:00000000 00:14:34.201 [2024-11-05 16:38:38.521767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:34.201 #6 NEW cov: 12289 ft: 12722 corp: 3/5b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ChangeByte- 00:14:34.201 [2024-11-05 16:38:38.601696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000017e1 cdw11:00000000 00:14:34.201 [2024-11-05 16:38:38.601740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:34.201 #7 NEW cov: 12295 ft: 12911 corp: 4/7b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ShuffleBytes- 00:14:34.201 [2024-11-05 16:38:38.682206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:34.201 [2024-11-05 16:38:38.682240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:34.201 [2024-11-05 16:38:38.682308] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:34.201 [2024-11-05 16:38:38.682328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:34.202 [2024-11-05 16:38:38.682391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffe1 cdw11:00000000 00:14:34.202 [2024-11-05 16:38:38.682409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:34.202 #8 NEW cov: 12380 ft: 13403 corp: 5/14b lim: 10 exec/s: 0 rss: 73Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:14:34.202 [2024-11-05 16:38:38.732387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:34.202 [2024-11-05 16:38:38.732421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:34.202 [2024-11-05 16:38:38.732489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:34.202 [2024-11-05 16:38:38.732512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:34.202 [2024-11-05 16:38:38.732576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000ae1 cdw11:00000000 00:14:34.202 [2024-11-05 16:38:38.732594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:34.459 #9 NEW cov: 12380 ft: 13480 corp: 6/21b lim: 10 exec/s: 0 rss: 73Mb L: 7/7 MS: 1 CrossOver- 00:14:34.459 [2024-11-05 16:38:38.812286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000027e1 cdw11:00000000 00:14:34.459 [2024-11-05 16:38:38.812320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:34.459 #11 NEW cov: 12380 ft: 13718 corp: 7/23b lim: 10 exec/s: 0 rss: 73Mb L: 2/7 MS: 2 EraseBytes-InsertByte- 00:14:34.459 [2024-11-05 16:38:38.862401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e1e1 cdw11:00000000 00:14:34.459 [2024-11-05 16:38:38.862434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:34.459 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:14:34.459 #12 NEW cov: 12403 ft: 13839 corp: 8/25b lim: 10 exec/s: 0 rss: 73Mb L: 2/7 MS: 1 CopyPart- 00:14:34.459 [2024-11-05 16:38:38.912549] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e1e1 cdw11:00000000 00:14:34.459 [2024-11-05 16:38:38.912582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:34.459 #13 NEW cov: 12403 ft: 13904 corp: 9/27b lim: 10 exec/s: 0 rss: 73Mb L: 2/7 MS: 1 ShuffleBytes- 00:14:34.459 [2024-11-05 16:38:38.962747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e1e1 cdw11:00000000 00:14:34.459 [2024-11-05 16:38:38.962781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:34.459 #14 NEW cov: 12403 ft: 13995 corp: 10/29b lim: 10 exec/s: 14 rss: 73Mb L: 2/7 MS: 1 ShuffleBytes- 00:14:34.459 [2024-11-05 16:38:39.042948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e11c cdw11:00000000 00:14:34.459 [2024-11-05 16:38:39.042981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:34.718 #15 NEW cov: 12403 ft: 14066 corp: 11/31b lim: 10 exec/s: 15 rss: 73Mb L: 2/7 MS: 1 ChangeBinInt- 00:14:34.718 [2024-11-05 16:38:39.123167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000027f8 cdw11:00000000 00:14:34.718 [2024-11-05 16:38:39.123202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:34.718 #16 NEW cov: 12403 ft: 14106 corp: 12/33b lim: 10 exec/s: 16 rss: 73Mb L: 2/7 MS: 1 ChangeByte- 00:14:34.718 [2024-11-05 16:38:39.203675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:34.718 [2024-11-05 16:38:39.203708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:34.718 [2024-11-05 16:38:39.203780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff21 cdw11:00000000 00:14:34.718 [2024-11-05 16:38:39.203799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:34.718 [2024-11-05 16:38:39.203866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000ae1 cdw11:00000000 00:14:34.718 [2024-11-05 16:38:39.203884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:34.718 #17 NEW cov: 12403 ft: 14138 corp: 13/40b lim: 10 exec/s: 17 rss: 74Mb L: 7/7 MS: 1 ChangeByte- 00:14:34.718 [2024-11-05 16:38:39.283605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000017e0 cdw11:00000000 00:14:34.718 [2024-11-05 16:38:39.283638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:34.977 #18 NEW cov: 12403 ft: 14162 corp: 14/42b lim: 10 exec/s: 18 rss: 74Mb L: 2/7 MS: 1 ChangeBit- 00:14:34.977 [2024-11-05 16:38:39.363822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000097e1 cdw11:00000000 00:14:34.977 [2024-11-05 16:38:39.363856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:34.977 #19 NEW cov: 12403 ft: 14180 corp: 15/44b lim: 10 exec/s: 19 rss: 74Mb L: 2/7 MS: 1 ChangeBit- 00:14:34.977 [2024-11-05 16:38:39.414105] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000017e0 cdw11:00000000 00:14:34.977 [2024-11-05 16:38:39.414138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:34.977 [2024-11-05 16:38:39.414204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e1e1 cdw11:00000000 
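Every admin command in this run is a fuzzed DELETE IO SQ (opcode 00h), and the interesting field is CDW10: in the NVMe base specification its low 16 bits carry the queue ID for the I/O queue delete commands. The target answers each attempt with INVALID OPCODE (00/01), consistent with an NVMe-oF target that does not implement the I/O queue create/delete admin commands at all, so the fuzzed qid never gets checked. A small illustrative helper (decode_qid is not an SPDK function) to decode the printed values:

#!/usr/bin/env bash
# Decode the QID from a cdw10 value as printed above; per the NVMe base
# spec, Delete I/O SQ/CQ place the queue ID in CDW10 bits 15:0.
decode_qid() {
  local cdw10=$((16#${1#0x}))
  printf 'cdw10=0x%08x -> qid=%u\n' "$cdw10" $((cdw10 & 0xFFFF))
}
decode_qid 0000e1e1   # fuzzed value from the trace above -> qid=57825
decode_qid 000017e1   # -> qid=6113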
00:14:34.977 [2024-11-05 16:38:39.414224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:34.977 #20 NEW cov: 12403 ft: 14331 corp: 16/48b lim: 10 exec/s: 20 rss: 74Mb L: 4/7 MS: 1 CrossOver- 00:14:34.977 [2024-11-05 16:38:39.494504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:34.977 [2024-11-05 16:38:39.494539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:34.977 [2024-11-05 16:38:39.494608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:34.977 [2024-11-05 16:38:39.494627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:34.977 [2024-11-05 16:38:39.494693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff7e cdw11:00000000 00:14:34.977 [2024-11-05 16:38:39.494719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:34.977 #21 NEW cov: 12403 ft: 14344 corp: 17/55b lim: 10 exec/s: 21 rss: 74Mb L: 7/7 MS: 1 ChangeByte- 00:14:34.977 [2024-11-05 16:38:39.544348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e1e0 cdw11:00000000 00:14:34.977 [2024-11-05 16:38:39.544382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:35.243 #22 NEW cov: 12403 ft: 14354 corp: 18/57b lim: 10 exec/s: 22 rss: 74Mb L: 2/7 MS: 1 ChangeBit- 00:14:35.243 [2024-11-05 16:38:39.594469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000061e1 cdw11:00000000 00:14:35.243 [2024-11-05 16:38:39.594503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:35.243 #23 NEW cov: 12403 ft: 14406 corp: 19/59b lim: 10 exec/s: 23 rss: 74Mb L: 2/7 MS: 1 ChangeBit- 00:14:35.243 [2024-11-05 16:38:39.644627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00006b27 cdw11:00000000 00:14:35.243 [2024-11-05 16:38:39.644661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:35.243 #25 NEW cov: 12403 ft: 14416 corp: 20/62b lim: 10 exec/s: 25 rss: 74Mb L: 3/7 MS: 2 ChangeByte-CrossOver- 00:14:35.243 [2024-11-05 16:38:39.695052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:14:35.243 [2024-11-05 16:38:39.695085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:35.243 [2024-11-05 16:38:39.695157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff0a cdw11:00000000 00:14:35.243 [2024-11-05 16:38:39.695177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:35.243 [2024-11-05 16:38:39.695245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e1e1 cdw11:00000000 
00:14:35.243 [2024-11-05 16:38:39.695264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:35.243 #26 NEW cov: 12403 ft: 14432 corp: 21/68b lim: 10 exec/s: 26 rss: 74Mb L: 6/7 MS: 1 EraseBytes- 00:14:35.243 [2024-11-05 16:38:39.744869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e1e1 cdw11:00000000 00:14:35.243 [2024-11-05 16:38:39.744903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:35.243 #27 NEW cov: 12403 ft: 14464 corp: 22/70b lim: 10 exec/s: 27 rss: 74Mb L: 2/7 MS: 1 CopyPart- 00:14:35.243 [2024-11-05 16:38:39.795058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000027e1 cdw11:00000000 00:14:35.243 [2024-11-05 16:38:39.795092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:35.243 #28 NEW cov: 12403 ft: 14501 corp: 23/73b lim: 10 exec/s: 28 rss: 74Mb L: 3/7 MS: 1 InsertByte- 00:14:35.505 [2024-11-05 16:38:39.845475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005bff cdw11:00000000 00:14:35.505 [2024-11-05 16:38:39.845509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:35.505 [2024-11-05 16:38:39.845579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff0a cdw11:00000000 00:14:35.505 [2024-11-05 16:38:39.845598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:35.505 [2024-11-05 16:38:39.845661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000e1e1 cdw11:00000000 00:14:35.505 [2024-11-05 16:38:39.845679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:35.505 #29 NEW cov: 12403 ft: 14542 corp: 24/79b lim: 10 exec/s: 29 rss: 74Mb L: 6/7 MS: 1 ChangeByte- 00:14:35.505 [2024-11-05 16:38:39.925390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000061e1 cdw11:00000000 00:14:35.505 [2024-11-05 16:38:39.925425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:35.505 #30 NEW cov: 12403 ft: 14548 corp: 25/82b lim: 10 exec/s: 30 rss: 74Mb L: 3/7 MS: 1 InsertByte- 00:14:35.505 [2024-11-05 16:38:40.005942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000e100 cdw11:00000000 00:14:35.505 [2024-11-05 16:38:40.005977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:35.505 [2024-11-05 16:38:40.006045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.505 [2024-11-05 16:38:40.006065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:35.505 [2024-11-05 16:38:40.006130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.505 
[2024-11-05 16:38:40.006149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:14:35.505 #31 NEW cov: 12403 ft: 14590 corp: 26/89b lim: 10 exec/s: 15 rss: 74Mb L: 7/7 MS: 1 InsertRepeatedBytes-
00:14:35.505 #31 DONE cov: 12403 ft: 14590 corp: 26/89b lim: 10 exec/s: 15 rss: 74Mb
00:14:35.505 Done 31 runs in 2 second(s)
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408'
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:14:35.802 16:38:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8
00:14:35.802 [2024-11-05 16:38:40.213132] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization...
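With run 7 closed out ("#31 DONE ... Done 31 runs in 2 second(s)") and fuzzer 8 launching from the same command template (only -Z, the port, the config, and the corpus directory change), each run's final coverage can be pulled from a saved copy of this console output for a quick side-by-side. The log filename below is a placeholder, not something the job writes:

#!/usr/bin/env bash
# One line per finished fuzzer run: final cov/ft/corp as reported by libFuzzer.
LOG=${1:-console.log}   # saved copy of this console log (assumed path)
grep -oE '#[0-9]+ DONE cov: [0-9]+ ft: [0-9]+ corp: [0-9]+/[0-9]+b' "$LOG"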
00:14:35.802 [2024-11-05 16:38:40.213206] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3523425 ] 00:14:36.084 [2024-11-05 16:38:40.481404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.084 [2024-11-05 16:38:40.529506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.084 [2024-11-05 16:38:40.593481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.084 [2024-11-05 16:38:40.609744] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:14:36.084 INFO: Running with entropic power schedule (0xFF, 100). 00:14:36.084 INFO: Seed: 2428431998 00:14:36.084 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:14:36.084 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:14:36.084 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:14:36.084 INFO: A corpus is not provided, starting from an empty corpus 00:14:36.084 [2024-11-05 16:38:40.655400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.084 [2024-11-05 16:38:40.655430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:36.343 #2 INITED cov: 12205 ft: 12194 corp: 1/1b exec/s: 0 rss: 72Mb 00:14:36.343 [2024-11-05 16:38:40.695361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.343 [2024-11-05 16:38:40.695390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:36.343 #3 NEW cov: 12318 ft: 12709 corp: 2/2b lim: 5 exec/s: 0 rss: 72Mb L: 1/1 MS: 1 ShuffleBytes- 00:14:36.343 [2024-11-05 16:38:40.755699] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.343 [2024-11-05 16:38:40.755731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:36.343 [2024-11-05 16:38:40.755803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.343 [2024-11-05 16:38:40.755817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:36.343 #4 NEW cov: 12324 ft: 13721 corp: 3/4b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CrossOver- 00:14:36.343 [2024-11-05 16:38:40.815869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.343 [2024-11-05 16:38:40.815894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:36.343 [2024-11-05 16:38:40.815950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.343 [2024-11-05 16:38:40.815963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:36.343 #5 NEW cov: 12409 ft: 14012 corp: 4/6b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CopyPart- 00:14:36.343 [2024-11-05 16:38:40.856007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.343 [2024-11-05 16:38:40.856033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:36.343 [2024-11-05 16:38:40.856093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.343 [2024-11-05 16:38:40.856108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:36.343 #6 NEW cov: 12409 ft: 14106 corp: 5/8b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CrossOver- 00:14:36.343 [2024-11-05 16:38:40.896083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.343 [2024-11-05 16:38:40.896110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:36.343 [2024-11-05 16:38:40.896168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.343 [2024-11-05 16:38:40.896183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:36.601 #7 NEW cov: 12409 ft: 14180 corp: 6/10b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 ChangeByte- 00:14:36.601 [2024-11-05 16:38:40.956051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.601 [2024-11-05 16:38:40.956077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:36.601 #8 NEW cov: 12409 ft: 14240 corp: 7/11b lim: 5 exec/s: 0 rss: 72Mb L: 1/2 MS: 1 ChangeByte- 00:14:36.601 [2024-11-05 16:38:40.996337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.601 [2024-11-05 16:38:40.996366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:36.601 [2024-11-05 16:38:40.996441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.601 [2024-11-05 16:38:40.996456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:36.601 #9 NEW cov: 12409 ft: 14273 corp: 8/13b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CrossOver- 00:14:36.601 [2024-11-05 16:38:41.036495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 
cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.601 [2024-11-05 16:38:41.036520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:36.601 [2024-11-05 16:38:41.036580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.601 [2024-11-05 16:38:41.036595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:36.601 #10 NEW cov: 12409 ft: 14319 corp: 9/15b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 CrossOver- 00:14:36.601 [2024-11-05 16:38:41.096834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.601 [2024-11-05 16:38:41.096861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:36.601 [2024-11-05 16:38:41.096935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.601 [2024-11-05 16:38:41.096953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:36.601 [2024-11-05 16:38:41.097013] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.601 [2024-11-05 16:38:41.097027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:36.601 #11 NEW cov: 12409 ft: 14573 corp: 10/18b lim: 5 exec/s: 0 rss: 72Mb L: 3/3 MS: 1 InsertByte- 00:14:36.601 [2024-11-05 16:38:41.136603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.601 [2024-11-05 16:38:41.136630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:36.601 #12 NEW cov: 12409 ft: 14590 corp: 11/19b lim: 5 exec/s: 0 rss: 72Mb L: 1/3 MS: 1 EraseBytes- 00:14:36.859 [2024-11-05 16:38:41.196774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.859 [2024-11-05 16:38:41.196799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:36.859 #13 NEW cov: 12409 ft: 14611 corp: 12/20b lim: 5 exec/s: 0 rss: 72Mb L: 1/3 MS: 1 CrossOver- 00:14:36.859 [2024-11-05 16:38:41.256945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.859 [2024-11-05 16:38:41.256970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:36.859 #14 NEW cov: 12409 ft: 14662 corp: 13/21b lim: 5 exec/s: 0 rss: 72Mb L: 1/3 MS: 1 EraseBytes- 00:14:36.859 [2024-11-05 16:38:41.297555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 
nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.859 [2024-11-05 16:38:41.297585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:36.859 [2024-11-05 16:38:41.297642] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.859 [2024-11-05 16:38:41.297657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:36.859 [2024-11-05 16:38:41.297721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.859 [2024-11-05 16:38:41.297757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:36.859 [2024-11-05 16:38:41.297819] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.859 [2024-11-05 16:38:41.297834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:36.859 #15 NEW cov: 12409 ft: 14945 corp: 14/25b lim: 5 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 CrossOver- 00:14:36.859 [2024-11-05 16:38:41.357216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.859 [2024-11-05 16:38:41.357242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:36.859 #16 NEW cov: 12409 ft: 14977 corp: 15/26b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 ChangeByte- 00:14:36.859 [2024-11-05 16:38:41.397499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.859 [2024-11-05 16:38:41.397526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:36.859 [2024-11-05 16:38:41.397603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.859 [2024-11-05 16:38:41.397619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:36.859 #17 NEW cov: 12409 ft: 15008 corp: 16/28b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 CopyPart- 00:14:36.859 [2024-11-05 16:38:41.437601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.859 [2024-11-05 16:38:41.437627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:36.859 [2024-11-05 16:38:41.437700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:36.859 [2024-11-05 16:38:41.437720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:37.118 #18 NEW cov: 12409 ft: 15087 corp: 17/30b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 ShuffleBytes- 00:14:37.118 [2024-11-05 16:38:41.497799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.118 [2024-11-05 16:38:41.497824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:37.118 [2024-11-05 16:38:41.497895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.118 [2024-11-05 16:38:41.497922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:37.118 #19 NEW cov: 12409 ft: 15104 corp: 18/32b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 InsertByte- 00:14:37.118 [2024-11-05 16:38:41.538053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.118 [2024-11-05 16:38:41.538080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:37.118 [2024-11-05 16:38:41.538138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.118 [2024-11-05 16:38:41.538152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:37.118 [2024-11-05 16:38:41.538210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.118 [2024-11-05 16:38:41.538223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:37.376 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:14:37.376 #20 NEW cov: 12432 ft: 15149 corp: 19/35b lim: 5 exec/s: 20 rss: 74Mb L: 3/4 MS: 1 InsertByte- 00:14:37.376 [2024-11-05 16:38:41.858685] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.376 [2024-11-05 16:38:41.858728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:37.376 #21 NEW cov: 12432 ft: 15186 corp: 20/36b lim: 5 exec/s: 21 rss: 74Mb L: 1/4 MS: 1 EraseBytes- 00:14:37.376 [2024-11-05 16:38:41.899208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.376 [2024-11-05 16:38:41.899234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:37.376 [2024-11-05 16:38:41.899290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.376 [2024-11-05 16:38:41.899305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:37.376 [2024-11-05 16:38:41.899358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.376 [2024-11-05 16:38:41.899372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:37.376 [2024-11-05 16:38:41.899427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.376 [2024-11-05 16:38:41.899440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:37.376 #22 NEW cov: 12432 ft: 15211 corp: 21/40b lim: 5 exec/s: 22 rss: 74Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:14:37.376 [2024-11-05 16:38:41.939125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.376 [2024-11-05 16:38:41.939150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:37.376 [2024-11-05 16:38:41.939207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.376 [2024-11-05 16:38:41.939222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:37.376 [2024-11-05 16:38:41.939282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.376 [2024-11-05 16:38:41.939295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:37.376 #23 NEW cov: 12432 ft: 15246 corp: 22/43b lim: 5 exec/s: 23 rss: 74Mb L: 3/4 MS: 1 CrossOver- 00:14:37.634 [2024-11-05 16:38:41.979256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.634 [2024-11-05 16:38:41.979282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:37.634 [2024-11-05 16:38:41.979339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.634 [2024-11-05 16:38:41.979353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:37.634 [2024-11-05 16:38:41.979407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.634 [2024-11-05 16:38:41.979420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:37.634 #24 NEW cov: 12432 ft: 15285 corp: 23/46b lim: 5 exec/s: 24 rss: 74Mb L: 3/4 MS: 1 ChangeByte- 00:14:37.634 [2024-11-05 16:38:42.039242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) 
qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.634 [2024-11-05 16:38:42.039267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:37.634 [2024-11-05 16:38:42.039323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.634 [2024-11-05 16:38:42.039337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:37.634 #25 NEW cov: 12432 ft: 15293 corp: 24/48b lim: 5 exec/s: 25 rss: 74Mb L: 2/4 MS: 1 ShuffleBytes- 00:14:37.634 [2024-11-05 16:38:42.079336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.634 [2024-11-05 16:38:42.079361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:37.634 [2024-11-05 16:38:42.079417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.634 [2024-11-05 16:38:42.079430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:37.634 #26 NEW cov: 12432 ft: 15307 corp: 25/50b lim: 5 exec/s: 26 rss: 74Mb L: 2/4 MS: 1 CrossOver- 00:14:37.634 [2024-11-05 16:38:42.119299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.634 [2024-11-05 16:38:42.119340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:37.634 #27 NEW cov: 12432 ft: 15337 corp: 26/51b lim: 5 exec/s: 27 rss: 74Mb L: 1/4 MS: 1 ChangeByte- 00:14:37.634 [2024-11-05 16:38:42.159597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.634 [2024-11-05 16:38:42.159622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:37.634 [2024-11-05 16:38:42.159679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.634 [2024-11-05 16:38:42.159695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:37.634 #28 NEW cov: 12432 ft: 15350 corp: 27/53b lim: 5 exec/s: 28 rss: 74Mb L: 2/4 MS: 1 EraseBytes- 00:14:37.634 [2024-11-05 16:38:42.199726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.634 [2024-11-05 16:38:42.199750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:37.634 [2024-11-05 16:38:42.199808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:14:37.634 [2024-11-05 16:38:42.199822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:37.891 #29 NEW cov: 12432 ft: 15393 corp: 28/55b lim: 5 exec/s: 29 rss: 74Mb L: 2/4 MS: 1 CopyPart- 00:14:37.891 [2024-11-05 16:38:42.260044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.891 [2024-11-05 16:38:42.260070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:37.891 [2024-11-05 16:38:42.260126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.891 [2024-11-05 16:38:42.260140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:37.891 [2024-11-05 16:38:42.260194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.891 [2024-11-05 16:38:42.260208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:37.891 #30 NEW cov: 12432 ft: 15402 corp: 29/58b lim: 5 exec/s: 30 rss: 74Mb L: 3/4 MS: 1 CopyPart- 00:14:37.891 [2024-11-05 16:38:42.320056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.891 [2024-11-05 16:38:42.320081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:37.891 [2024-11-05 16:38:42.320138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.891 [2024-11-05 16:38:42.320152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:37.891 #31 NEW cov: 12432 ft: 15408 corp: 30/60b lim: 5 exec/s: 31 rss: 74Mb L: 2/4 MS: 1 CopyPart- 00:14:37.891 [2024-11-05 16:38:42.380431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.891 [2024-11-05 16:38:42.380456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:37.891 [2024-11-05 16:38:42.380515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.892 [2024-11-05 16:38:42.380528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:37.892 [2024-11-05 16:38:42.380592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.892 [2024-11-05 16:38:42.380611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:37.892 #32 NEW cov: 12432 ft: 15458 corp: 
31/63b lim: 5 exec/s: 32 rss: 74Mb L: 3/4 MS: 1 CopyPart- 00:14:37.892 [2024-11-05 16:38:42.420164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.892 [2024-11-05 16:38:42.420189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:37.892 #33 NEW cov: 12432 ft: 15476 corp: 32/64b lim: 5 exec/s: 33 rss: 74Mb L: 1/4 MS: 1 EraseBytes- 00:14:37.892 [2024-11-05 16:38:42.460436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.892 [2024-11-05 16:38:42.460463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:37.892 [2024-11-05 16:38:42.460521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:37.892 [2024-11-05 16:38:42.460535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:38.150 #34 NEW cov: 12432 ft: 15484 corp: 33/66b lim: 5 exec/s: 34 rss: 74Mb L: 2/4 MS: 1 ShuffleBytes- 00:14:38.150 [2024-11-05 16:38:42.500731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.150 [2024-11-05 16:38:42.500758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:38.150 [2024-11-05 16:38:42.500815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.150 [2024-11-05 16:38:42.500829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:38.150 [2024-11-05 16:38:42.500886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.150 [2024-11-05 16:38:42.500900] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:38.150 [2024-11-05 16:38:42.540664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.150 [2024-11-05 16:38:42.540689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:38.150 [2024-11-05 16:38:42.540747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.150 [2024-11-05 16:38:42.540761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:38.150 #36 NEW cov: 12432 ft: 15488 corp: 34/68b lim: 5 exec/s: 36 rss: 74Mb L: 2/4 MS: 2 InsertByte-EraseBytes- 00:14:38.150 [2024-11-05 16:38:42.581010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 
cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.150 [2024-11-05 16:38:42.581038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:38.150 [2024-11-05 16:38:42.581113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.150 [2024-11-05 16:38:42.581129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:38.150 [2024-11-05 16:38:42.581185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.150 [2024-11-05 16:38:42.581203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:38.150 #37 NEW cov: 12432 ft: 15497 corp: 35/71b lim: 5 exec/s: 37 rss: 74Mb L: 3/4 MS: 1 ChangeBit- 00:14:38.150 [2024-11-05 16:38:42.621268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.150 [2024-11-05 16:38:42.621294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:38.150 [2024-11-05 16:38:42.621351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.150 [2024-11-05 16:38:42.621365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:38.150 [2024-11-05 16:38:42.621420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.150 [2024-11-05 16:38:42.621433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:38.150 [2024-11-05 16:38:42.621489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.150 [2024-11-05 16:38:42.621502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:38.150 #38 NEW cov: 12432 ft: 15512 corp: 36/75b lim: 5 exec/s: 19 rss: 74Mb L: 4/4 MS: 1 InsertByte- 00:14:38.150 #38 DONE cov: 12432 ft: 15512 corp: 36/75b lim: 5 exec/s: 19 rss: 74Mb 00:14:38.150 Done 38 runs in 2 second(s) 00:14:38.407 16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:14:38.407 16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:14:38.407 16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:14:38.407 16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:14:38.407 16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:14:38.408 16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:14:38.408 16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:14:38.408 16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf
16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9
16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409
16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9
16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409'
16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
16:38:42 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9
[2024-11-05 16:38:42.836327] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization...
[2024-11-05 16:38:42.836397] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3523790 ]
[2024-11-05 16:38:43.103729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-05 16:38:43.151601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-05 16:38:43.215540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-11-05 16:38:43.231778] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 ***
INFO: Running with entropic power schedule (0xFF, 100).
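For anyone replaying this by hand: the trace above shows start_llvm_fuzz deriving a dedicated NVMe/TCP listen port for fuzzer 9, rewriting the JSON target config to that port, and preparing a LeakSanitizer suppression file before launching llvm_nvme_fuzz. A minimal shell sketch of the same steps, using the same paths as the trace; the redirect targets are assumptions, since set -x does not echo redirections:

fuzzer_type=9
port="44$(printf %02d "$fuzzer_type")"   # printf %02d 9 prints "09", so port becomes 4409
nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
suppress_file=/var/tmp/suppress_nvmf_fuzz
# Rewrite the default listener port 4420 to the derived one; the "> $nvmf_cfg" redirect is inferred.
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf > "$nvmf_cfg"
# Suppress two known in-target leaks for the short run; the ">> $suppress_file" redirect is likewise inferred.
echo leak:spdk_nvmf_qpair_disconnect >> "$suppress_file"
echo leak:nvmf_ctrlr_create >> "$suppress_file"
trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"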
00:14:38.666 INFO: Seed: 755467227 00:14:38.923 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:14:38.923 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:14:38.923 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:14:38.923 INFO: A corpus is not provided, starting from an empty corpus 00:14:38.923 [2024-11-05 16:38:43.297571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.923 [2024-11-05 16:38:43.297611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:38.923 #2 INITED cov: 12178 ft: 12177 corp: 1/1b exec/s: 0 rss: 72Mb 00:14:38.923 [2024-11-05 16:38:43.347766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.923 [2024-11-05 16:38:43.347801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:38.923 [2024-11-05 16:38:43.347869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:38.923 [2024-11-05 16:38:43.347889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:39.488 NEW_FUNC[1/1]: 0x199fd08 in nvme_get_transport /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_transport.c:56 00:14:39.488 #3 NEW cov: 12317 ft: 13347 corp: 2/3b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CopyPart- 00:14:39.488 [2024-11-05 16:38:43.839457] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.488 [2024-11-05 16:38:43.839519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:39.488 [2024-11-05 16:38:43.839610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.488 [2024-11-05 16:38:43.839638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:39.488 #4 NEW cov: 12323 ft: 13541 corp: 3/5b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CopyPart- 00:14:39.488 [2024-11-05 16:38:43.889287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.488 [2024-11-05 16:38:43.889324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:39.488 [2024-11-05 16:38:43.889394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.488 [2024-11-05 16:38:43.889419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:39.488 #5 NEW cov: 12408 ft: 13767 corp: 4/7b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 
MS: 1 ChangeByte- 00:14:39.488 [2024-11-05 16:38:43.969518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.488 [2024-11-05 16:38:43.969553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:39.488 [2024-11-05 16:38:43.969626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.488 [2024-11-05 16:38:43.969646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:39.488 #6 NEW cov: 12408 ft: 13818 corp: 5/9b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte- 00:14:39.488 [2024-11-05 16:38:44.019625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.488 [2024-11-05 16:38:44.019661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:39.488 [2024-11-05 16:38:44.019739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.488 [2024-11-05 16:38:44.019760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:39.745 #7 NEW cov: 12408 ft: 14007 corp: 6/11b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ShuffleBytes- 00:14:39.745 [2024-11-05 16:38:44.099900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.745 [2024-11-05 16:38:44.099935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:39.745 [2024-11-05 16:38:44.100006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.745 [2024-11-05 16:38:44.100027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:39.745 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:14:39.745 #8 NEW cov: 12431 ft: 14134 corp: 7/13b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ChangeByte- 00:14:39.745 [2024-11-05 16:38:44.180083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.745 [2024-11-05 16:38:44.180118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:39.745 [2024-11-05 16:38:44.180188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.746 [2024-11-05 16:38:44.180209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:39.746 #9 NEW cov: 12431 ft: 14180 corp: 8/15b lim: 5 exec/s: 
0 rss: 73Mb L: 2/2 MS: 1 ChangeBit- 00:14:39.746 [2024-11-05 16:38:44.230236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.746 [2024-11-05 16:38:44.230270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:39.746 [2024-11-05 16:38:44.230341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.746 [2024-11-05 16:38:44.230366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:39.746 #10 NEW cov: 12431 ft: 14199 corp: 9/17b lim: 5 exec/s: 10 rss: 73Mb L: 2/2 MS: 1 ChangeBinInt- 00:14:39.746 [2024-11-05 16:38:44.310499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.746 [2024-11-05 16:38:44.310535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:39.746 [2024-11-05 16:38:44.310610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:39.746 [2024-11-05 16:38:44.310630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:40.003 #11 NEW cov: 12431 ft: 14252 corp: 10/19b lim: 5 exec/s: 11 rss: 73Mb L: 2/2 MS: 1 ChangeBit- 00:14:40.003 [2024-11-05 16:38:44.360397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.003 [2024-11-05 16:38:44.360431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:40.003 #12 NEW cov: 12431 ft: 14310 corp: 11/20b lim: 5 exec/s: 12 rss: 73Mb L: 1/2 MS: 1 EraseBytes- 00:14:40.003 [2024-11-05 16:38:44.440573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.003 [2024-11-05 16:38:44.440608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:40.003 #13 NEW cov: 12431 ft: 14331 corp: 12/21b lim: 5 exec/s: 13 rss: 73Mb L: 1/2 MS: 1 ChangeBit- 00:14:40.003 [2024-11-05 16:38:44.490946] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.003 [2024-11-05 16:38:44.490980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:40.003 [2024-11-05 16:38:44.491053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.003 [2024-11-05 16:38:44.491073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:40.003 #14 NEW cov: 12431 ft: 14355 corp: 13/23b lim: 5 
exec/s: 14 rss: 73Mb L: 2/2 MS: 1 ChangeBit- 00:14:40.003 [2024-11-05 16:38:44.541636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.003 [2024-11-05 16:38:44.541670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:40.003 [2024-11-05 16:38:44.541749] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.003 [2024-11-05 16:38:44.541770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:40.004 [2024-11-05 16:38:44.541836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.004 [2024-11-05 16:38:44.541855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:40.004 [2024-11-05 16:38:44.541922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.004 [2024-11-05 16:38:44.541941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:40.004 [2024-11-05 16:38:44.542015] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.004 [2024-11-05 16:38:44.542034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:14:40.004 #15 NEW cov: 12431 ft: 14772 corp: 14/28b lim: 5 exec/s: 15 rss: 73Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:14:40.261 [2024-11-05 16:38:44.591800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.261 [2024-11-05 16:38:44.591834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:40.261 [2024-11-05 16:38:44.591905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.261 [2024-11-05 16:38:44.591925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:40.261 [2024-11-05 16:38:44.591997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.261 [2024-11-05 16:38:44.592016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:40.261 [2024-11-05 16:38:44.592084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.261 [2024-11-05 16:38:44.592103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 
m:0 dnr:0 00:14:40.261 [2024-11-05 16:38:44.592173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.261 [2024-11-05 16:38:44.592192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:14:40.261 #16 NEW cov: 12431 ft: 14795 corp: 15/33b lim: 5 exec/s: 16 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:14:40.261 [2024-11-05 16:38:44.671627] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.261 [2024-11-05 16:38:44.671661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:40.261 [2024-11-05 16:38:44.671739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.261 [2024-11-05 16:38:44.671759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:40.261 [2024-11-05 16:38:44.671829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.261 [2024-11-05 16:38:44.671848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:40.261 #17 NEW cov: 12431 ft: 14957 corp: 16/36b lim: 5 exec/s: 17 rss: 74Mb L: 3/5 MS: 1 EraseBytes- 00:14:40.261 [2024-11-05 16:38:44.751666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.261 [2024-11-05 16:38:44.751703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:40.261 [2024-11-05 16:38:44.751782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.261 [2024-11-05 16:38:44.751806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:40.261 #18 NEW cov: 12431 ft: 15041 corp: 17/38b lim: 5 exec/s: 18 rss: 74Mb L: 2/5 MS: 1 CopyPart- 00:14:40.262 [2024-11-05 16:38:44.832089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.262 [2024-11-05 16:38:44.832123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:40.262 [2024-11-05 16:38:44.832195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.262 [2024-11-05 16:38:44.832215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:40.262 [2024-11-05 16:38:44.832282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:14:40.262 [2024-11-05 16:38:44.832301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:40.519 #19 NEW cov: 12431 ft: 15065 corp: 18/41b lim: 5 exec/s: 19 rss: 74Mb L: 3/5 MS: 1 CrossOver- 00:14:40.519 [2024-11-05 16:38:44.911961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.519 [2024-11-05 16:38:44.911996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:40.519 #20 NEW cov: 12431 ft: 15090 corp: 19/42b lim: 5 exec/s: 20 rss: 74Mb L: 1/5 MS: 1 EraseBytes- 00:14:40.519 [2024-11-05 16:38:44.992752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.519 [2024-11-05 16:38:44.992787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:40.519 [2024-11-05 16:38:44.992856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.519 [2024-11-05 16:38:44.992876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:40.519 [2024-11-05 16:38:44.992945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.519 [2024-11-05 16:38:44.992964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:40.519 [2024-11-05 16:38:44.993032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.519 [2024-11-05 16:38:44.993051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:40.519 #21 NEW cov: 12431 ft: 15139 corp: 20/46b lim: 5 exec/s: 21 rss: 74Mb L: 4/5 MS: 1 EraseBytes- 00:14:40.519 [2024-11-05 16:38:45.043048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.519 [2024-11-05 16:38:45.043082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:40.519 [2024-11-05 16:38:45.043155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.519 [2024-11-05 16:38:45.043175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:40.519 [2024-11-05 16:38:45.043247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.519 [2024-11-05 16:38:45.043266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:40.519 
[2024-11-05 16:38:45.043334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.519 [2024-11-05 16:38:45.043353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:40.519 [2024-11-05 16:38:45.043426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.519 [2024-11-05 16:38:45.043446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:14:40.519 #22 NEW cov: 12431 ft: 15154 corp: 21/51b lim: 5 exec/s: 22 rss: 74Mb L: 5/5 MS: 1 ChangeBit- 00:14:40.519 [2024-11-05 16:38:45.092614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.519 [2024-11-05 16:38:45.092649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:40.519 [2024-11-05 16:38:45.092724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.519 [2024-11-05 16:38:45.092744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:40.777 #23 NEW cov: 12431 ft: 15165 corp: 22/53b lim: 5 exec/s: 23 rss: 74Mb L: 2/5 MS: 1 ChangeByte- 00:14:40.777 [2024-11-05 16:38:45.142584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.777 [2024-11-05 16:38:45.142618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:40.777 #24 NEW cov: 12431 ft: 15235 corp: 23/54b lim: 5 exec/s: 24 rss: 74Mb L: 1/5 MS: 1 EraseBytes- 00:14:40.777 [2024-11-05 16:38:45.192730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.777 [2024-11-05 16:38:45.192765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:40.777 #25 NEW cov: 12431 ft: 15266 corp: 24/55b lim: 5 exec/s: 25 rss: 74Mb L: 1/5 MS: 1 EraseBytes- 00:14:40.777 [2024-11-05 16:38:45.243047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.777 [2024-11-05 16:38:45.243081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:40.777 [2024-11-05 16:38:45.243151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:40.777 [2024-11-05 16:38:45.243170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:40.777 #26 NEW cov: 12431 ft: 15267 corp: 25/57b lim: 5 exec/s: 13 rss: 74Mb L: 2/5 MS: 1 ChangeBit- 
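A note for readers skimming these runs: the status lines are standard libFuzzer output, not SPDK-specific. In a line such as "#26 NEW cov: 12431 ft: 15267 corp: 25/57b lim: 5 exec/s: 13 rss: 74Mb L: 2/5 MS: 1 ChangeBit-", cov counts covered code points, ft counts coverage features, corp gives the corpus size as inputs/total bytes, lim is the input-length cap libFuzzer is currently applying (it grows during a run), L is the size of the input just added over the largest input in the corpus, and MS lists the mutation(s) that produced it; NEW_FUNC lines flag functions reached for the first time. A throwaway one-liner for pulling coverage growth out of a saved copy of this console output (build.log is a hypothetical file name, not something the job produces):

grep -oE '#[0-9]+ (INITED|NEW|DONE) cov: [0-9]+ ft: [0-9]+' build.log |
    awk '{printf "%s %s cov=%s ft=%s\n", $1, $2, $4, $6}'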
00:14:40.777 #26 DONE cov: 12431 ft: 15267 corp: 25/57b lim: 5 exec/s: 13 rss: 74Mb
00:14:41.034 Done 26 runs in 2 second(s)
00:14:41.034 16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz
16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1
16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10
16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10
16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf
16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10
16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410
16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10
16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410'
16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
16:38:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10
[2024-11-05 16:38:45.487250] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization...
00:14:41.035 [2024-11-05 16:38:45.487325] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3524119 ] 00:14:41.292 [2024-11-05 16:38:45.759046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.292 [2024-11-05 16:38:45.806570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.292 [2024-11-05 16:38:45.870547] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.550 [2024-11-05 16:38:45.886791] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:14:41.550 INFO: Running with entropic power schedule (0xFF, 100). 00:14:41.550 INFO: Seed: 3409468539 00:14:41.550 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:14:41.550 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:14:41.550 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:14:41.550 INFO: A corpus is not provided, starting from an empty corpus 00:14:41.550 #2 INITED exec/s: 0 rss: 66Mb 00:14:41.550 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:14:41.550 This may also happen if the target rejected all inputs we tried so far 00:14:41.550 [2024-11-05 16:38:45.935679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:41.550 [2024-11-05 16:38:45.935708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:41.550 [2024-11-05 16:38:45.935775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f26 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:41.550 [2024-11-05 16:38:45.935789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:41.808 NEW_FUNC[1/715]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:14:41.808 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:14:41.808 #6 NEW cov: 12224 ft: 12225 corp: 2/17b lim: 40 exec/s: 0 rss: 73Mb L: 16/16 MS: 4 ChangeByte-ShuffleBytes-ChangeByte-InsertRepeatedBytes- 00:14:41.808 [2024-11-05 16:38:46.256468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:41.808 [2024-11-05 16:38:46.256505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:41.808 [2024-11-05 16:38:46.256567] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:41.808 [2024-11-05 16:38:46.256580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:41.808 #7 NEW cov: 12341 ft: 12705 corp: 3/34b lim: 40 exec/s: 0 rss: 73Mb L: 17/17 MS: 1 
InsertByte- 00:14:41.808 [2024-11-05 16:38:46.316389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:41.808 [2024-11-05 16:38:46.316417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:41.808 #8 NEW cov: 12347 ft: 13339 corp: 4/48b lim: 40 exec/s: 0 rss: 73Mb L: 14/17 MS: 1 EraseBytes- 00:14:41.808 [2024-11-05 16:38:46.376799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:41.808 [2024-11-05 16:38:46.376825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:41.808 [2024-11-05 16:38:46.376886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:41.808 [2024-11-05 16:38:46.376901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:41.808 [2024-11-05 16:38:46.376960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:41.808 [2024-11-05 16:38:46.376973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:42.065 #9 NEW cov: 12432 ft: 13791 corp: 5/79b lim: 40 exec/s: 0 rss: 73Mb L: 31/31 MS: 1 InsertRepeatedBytes- 00:14:42.065 [2024-11-05 16:38:46.417060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.066 [2024-11-05 16:38:46.417085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.066 [2024-11-05 16:38:46.417145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f915f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.066 [2024-11-05 16:38:46.417159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:42.066 [2024-11-05 16:38:46.417218] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.066 [2024-11-05 16:38:46.417231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:42.066 [2024-11-05 16:38:46.417288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00005f26 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.066 [2024-11-05 16:38:46.417305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:42.066 #10 NEW cov: 12432 ft: 14286 corp: 6/111b lim: 40 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 InsertByte- 00:14:42.066 [2024-11-05 16:38:46.476804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5ffc5f SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:14:42.066 [2024-11-05 16:38:46.476830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.066 #11 NEW cov: 12432 ft: 14439 corp: 7/125b lim: 40 exec/s: 0 rss: 73Mb L: 14/32 MS: 1 CopyPart- 00:14:42.066 [2024-11-05 16:38:46.536967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5ffc5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.066 [2024-11-05 16:38:46.536993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.066 #12 NEW cov: 12432 ft: 14529 corp: 8/139b lim: 40 exec/s: 0 rss: 73Mb L: 14/32 MS: 1 ShuffleBytes- 00:14:42.066 [2024-11-05 16:38:46.597279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.066 [2024-11-05 16:38:46.597305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.066 [2024-11-05 16:38:46.597362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f60 cdw11:5f5f5f26 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.066 [2024-11-05 16:38:46.597376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:42.066 #13 NEW cov: 12432 ft: 14595 corp: 9/155b lim: 40 exec/s: 0 rss: 73Mb L: 16/32 MS: 1 ChangeBinInt- 00:14:42.066 [2024-11-05 16:38:46.637693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.066 [2024-11-05 16:38:46.637725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.066 [2024-11-05 16:38:46.637785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f915f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.066 [2024-11-05 16:38:46.637799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:42.066 [2024-11-05 16:38:46.637852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00005f00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.066 [2024-11-05 16:38:46.637866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:42.066 [2024-11-05 16:38:46.637923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00005f26 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.066 [2024-11-05 16:38:46.637936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:42.323 #14 NEW cov: 12432 ft: 14653 corp: 10/187b lim: 40 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 ShuffleBytes- 00:14:42.323 [2024-11-05 16:38:46.697727] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.323 [2024-11-05 16:38:46.697754] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.323 [2024-11-05 16:38:46.697813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.324 [2024-11-05 16:38:46.697827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:42.324 [2024-11-05 16:38:46.697889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.324 [2024-11-05 16:38:46.697902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:42.324 #15 NEW cov: 12432 ft: 14706 corp: 11/218b lim: 40 exec/s: 0 rss: 73Mb L: 31/32 MS: 1 ChangeBit- 00:14:42.324 [2024-11-05 16:38:46.737669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.324 [2024-11-05 16:38:46.737696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.324 [2024-11-05 16:38:46.737770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:00000010 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.324 [2024-11-05 16:38:46.737786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:42.324 #16 NEW cov: 12432 ft: 14756 corp: 12/234b lim: 40 exec/s: 0 rss: 73Mb L: 16/32 MS: 1 ChangeBinInt- 00:14:42.324 [2024-11-05 16:38:46.777646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.324 [2024-11-05 16:38:46.777673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.324 #17 NEW cov: 12432 ft: 14761 corp: 13/248b lim: 40 exec/s: 0 rss: 73Mb L: 14/32 MS: 1 ShuffleBytes- 00:14:42.324 [2024-11-05 16:38:46.818054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.324 [2024-11-05 16:38:46.818079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.324 [2024-11-05 16:38:46.818136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.324 [2024-11-05 16:38:46.818150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:42.324 [2024-11-05 16:38:46.818213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.324 [2024-11-05 16:38:46.818233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:42.324 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 
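Each fuzzed admin command above shows up as a pair of NOTICE lines: nvme_admin_qpair_print_command logs the submission (opcode in parentheses, e.g. NAMESPACE MANAGEMENT is admin opcode 0d), and spdk_nvme_print_completion logs the target's answer, where INVALID OPCODE (00/01) is status code type 0h, status code 1h (Invalid Command Opcode), i.e. the target rejected the mutated command without crashing. A hypothetical one-liner (not part of the SPDK tree; build.log again stands for a saved copy of this output) to tally which opcodes a run exercised:

grep -oE 'print_command: \*NOTICE\*: [A-Z ]+\([0-9a-f]+\)' build.log | sort | uniq -c | sort -rn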
00:14:42.324 #18 NEW cov: 12455 ft: 14807 corp: 14/279b lim: 40 exec/s: 0 rss: 74Mb L: 31/32 MS: 1 ChangeBinInt- 00:14:42.324 [2024-11-05 16:38:46.878205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:325f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.324 [2024-11-05 16:38:46.878230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.324 [2024-11-05 16:38:46.878288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.324 [2024-11-05 16:38:46.878303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:42.324 [2024-11-05 16:38:46.878359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.324 [2024-11-05 16:38:46.878372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:42.324 #24 NEW cov: 12455 ft: 14829 corp: 15/310b lim: 40 exec/s: 0 rss: 74Mb L: 31/32 MS: 1 ChangeByte- 00:14:42.582 [2024-11-05 16:38:46.918319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:46.918344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.582 [2024-11-05 16:38:46.918405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:46.918418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:42.582 [2024-11-05 16:38:46.918475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00005ffc cdw11:5f5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:46.918489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:42.582 #25 NEW cov: 12455 ft: 14844 corp: 16/339b lim: 40 exec/s: 25 rss: 74Mb L: 29/32 MS: 1 InsertRepeatedBytes- 00:14:42.582 [2024-11-05 16:38:46.978318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:46.978344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.582 [2024-11-05 16:38:46.978420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:46.978435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:42.582 #26 NEW cov: 12455 ft: 14925 corp: 17/356b lim: 40 exec/s: 26 rss: 74Mb L: 17/32 MS: 1 CopyPart- 00:14:42.582 [2024-11-05 16:38:47.018731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01087575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:47.018757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.582 [2024-11-05 16:38:47.018831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:47.018845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:42.582 [2024-11-05 16:38:47.018901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:47.018915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:42.582 [2024-11-05 16:38:47.018973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757527 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:47.018986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:42.582 #29 NEW cov: 12455 ft: 14941 corp: 18/388b lim: 40 exec/s: 29 rss: 74Mb L: 32/32 MS: 3 ChangeByte-CMP-InsertRepeatedBytes- DE: "\001\010"- 00:14:42.582 [2024-11-05 16:38:47.058872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:01087575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:47.058898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.582 [2024-11-05 16:38:47.058974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:75757575 cdw11:00207575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:47.058992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:42.582 [2024-11-05 16:38:47.059052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:75757575 cdw11:75757575 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:47.059066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:42.582 [2024-11-05 16:38:47.059127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:75757575 cdw11:75757527 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:47.059140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:42.582 #30 NEW cov: 12455 ft: 14953 corp: 19/420b lim: 40 exec/s: 30 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:14:42.582 [2024-11-05 16:38:47.118920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:47.118947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:14:42.582 [2024-11-05 16:38:47.119008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:47.119022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:42.582 [2024-11-05 16:38:47.119089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:47.119110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:42.582 #31 NEW cov: 12455 ft: 15004 corp: 20/451b lim: 40 exec/s: 31 rss: 74Mb L: 31/32 MS: 1 ShuffleBytes- 00:14:42.582 [2024-11-05 16:38:47.159029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:47.159053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.582 [2024-11-05 16:38:47.159129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f625f cdw11:5f5f5f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:47.159144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:42.582 [2024-11-05 16:38:47.159204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.582 [2024-11-05 16:38:47.159218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:42.840 #32 NEW cov: 12455 ft: 15007 corp: 21/482b lim: 40 exec/s: 32 rss: 74Mb L: 31/32 MS: 1 ChangeBinInt- 00:14:42.840 [2024-11-05 16:38:47.219053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.840 [2024-11-05 16:38:47.219078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.840 [2024-11-05 16:38:47.219155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f2c5f cdw11:5f5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.840 [2024-11-05 16:38:47.219170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:42.840 #33 NEW cov: 12455 ft: 15041 corp: 22/499b lim: 40 exec/s: 33 rss: 74Mb L: 17/32 MS: 1 ChangeByte- 00:14:42.840 [2024-11-05 16:38:47.279489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.840 [2024-11-05 16:38:47.279514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.840 [2024-11-05 16:38:47.279572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:14:42.840 [2024-11-05 16:38:47.279586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:42.840 [2024-11-05 16:38:47.279641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.840 [2024-11-05 16:38:47.279655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:42.840 [2024-11-05 16:38:47.279717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.840 [2024-11-05 16:38:47.279731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:42.840 #34 NEW cov: 12455 ft: 15045 corp: 23/533b lim: 40 exec/s: 34 rss: 74Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:14:42.840 [2024-11-05 16:38:47.319556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.840 [2024-11-05 16:38:47.319582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.840 [2024-11-05 16:38:47.319658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.840 [2024-11-05 16:38:47.319673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:42.840 [2024-11-05 16:38:47.319730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:f1f1f100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.840 [2024-11-05 16:38:47.319745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:42.840 [2024-11-05 16:38:47.319804] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.840 [2024-11-05 16:38:47.319818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:42.840 #35 NEW cov: 12455 ft: 15059 corp: 24/567b lim: 40 exec/s: 35 rss: 74Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:14:42.840 [2024-11-05 16:38:47.359432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.840 [2024-11-05 16:38:47.359458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.841 [2024-11-05 16:38:47.359534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:000000ef SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.841 [2024-11-05 16:38:47.359549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:42.841 #36 NEW cov: 12455 ft: 15093 corp: 25/583b lim: 40 exec/s: 36 rss: 74Mb L: 16/34 MS: 1 ChangeBinInt- 00:14:42.841 
[2024-11-05 16:38:47.419599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.841 [2024-11-05 16:38:47.419624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:42.841 [2024-11-05 16:38:47.419702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f2c5d cdw11:5f5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:42.841 [2024-11-05 16:38:47.419721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:43.098 #37 NEW cov: 12455 ft: 15099 corp: 26/601b lim: 40 exec/s: 37 rss: 74Mb L: 18/34 MS: 1 InsertByte- 00:14:43.098 [2024-11-05 16:38:47.479789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.098 [2024-11-05 16:38:47.479814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:43.098 [2024-11-05 16:38:47.479888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:00310000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.098 [2024-11-05 16:38:47.479902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:43.098 #38 NEW cov: 12455 ft: 15141 corp: 27/618b lim: 40 exec/s: 38 rss: 74Mb L: 17/34 MS: 1 InsertByte- 00:14:43.098 [2024-11-05 16:38:47.520166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.098 [2024-11-05 16:38:47.520191] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:43.098 [2024-11-05 16:38:47.520249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f91ff06 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.098 [2024-11-05 16:38:47.520264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:43.098 [2024-11-05 16:38:47.520321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00005f00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.098 [2024-11-05 16:38:47.520335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:43.098 [2024-11-05 16:38:47.520391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00005f26 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.098 [2024-11-05 16:38:47.520404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:43.098 #39 NEW cov: 12455 ft: 15144 corp: 28/650b lim: 40 exec/s: 39 rss: 74Mb L: 32/34 MS: 1 CMP- DE: "\377\006"- 00:14:43.098 [2024-11-05 16:38:47.580189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:43.098 [2024-11-05 16:38:47.580214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:43.098 [2024-11-05 16:38:47.580289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:1f000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.098 [2024-11-05 16:38:47.580305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:43.098 [2024-11-05 16:38:47.580364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:01000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.098 [2024-11-05 16:38:47.580378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:43.098 #40 NEW cov: 12455 ft: 15157 corp: 29/681b lim: 40 exec/s: 40 rss: 74Mb L: 31/34 MS: 1 ChangeBinInt- 00:14:43.098 [2024-11-05 16:38:47.620335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.098 [2024-11-05 16:38:47.620363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:43.098 [2024-11-05 16:38:47.620439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f625f cdw11:005f5f00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.098 [2024-11-05 16:38:47.620453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:43.098 [2024-11-05 16:38:47.620509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:005f0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.098 [2024-11-05 16:38:47.620523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:43.098 #41 NEW cov: 12455 ft: 15186 corp: 30/712b lim: 40 exec/s: 41 rss: 74Mb L: 31/34 MS: 1 ShuffleBytes- 00:14:43.098 [2024-11-05 16:38:47.680370] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.098 [2024-11-05 16:38:47.680395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:43.098 [2024-11-05 16:38:47.680469] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:00010810 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.098 [2024-11-05 16:38:47.680484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:43.356 #42 NEW cov: 12455 ft: 15205 corp: 31/728b lim: 40 exec/s: 42 rss: 74Mb L: 16/34 MS: 1 PersAutoDict- DE: "\001\010"- 00:14:43.356 [2024-11-05 16:38:47.720533] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.356 [2024-11-05 16:38:47.720559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:43.356 [2024-11-05 
16:38:47.720635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f0c5f cdw11:5f5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.356 [2024-11-05 16:38:47.720649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:43.356 #43 NEW cov: 12455 ft: 15222 corp: 32/745b lim: 40 exec/s: 43 rss: 74Mb L: 17/34 MS: 1 ChangeBit- 00:14:43.356 [2024-11-05 16:38:47.760596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f9c cdw11:cb1e8d33 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.356 [2024-11-05 16:38:47.760622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:43.356 [2024-11-05 16:38:47.760682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:9f3a005f cdw11:5f5f5f26 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.356 [2024-11-05 16:38:47.760695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:43.356 #44 NEW cov: 12455 ft: 15245 corp: 33/761b lim: 40 exec/s: 44 rss: 74Mb L: 16/34 MS: 1 CMP- DE: "\234\313\036\2153\237:\000"- 00:14:43.356 [2024-11-05 16:38:47.801137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.356 [2024-11-05 16:38:47.801162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:43.356 [2024-11-05 16:38:47.801235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f91ff06 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.356 [2024-11-05 16:38:47.801253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:43.356 [2024-11-05 16:38:47.801310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00005f00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.356 [2024-11-05 16:38:47.801324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:43.356 [2024-11-05 16:38:47.801382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00005f9c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.356 [2024-11-05 16:38:47.801396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:43.356 [2024-11-05 16:38:47.801456] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:cb1e8d33 cdw11:9f3a0026 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.356 [2024-11-05 16:38:47.801469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:14:43.356 #45 NEW cov: 12455 ft: 15302 corp: 34/801b lim: 40 exec/s: 45 rss: 74Mb L: 40/40 MS: 1 PersAutoDict- DE: "\234\313\036\2153\237:\000"- 00:14:43.356 [2024-11-05 16:38:47.860872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f5f cdw11:fc5f5f5f 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.356 [2024-11-05 16:38:47.860898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:43.356 [2024-11-05 16:38:47.860956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:5f5f5f5f cdw11:5f5f5f5f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.356 [2024-11-05 16:38:47.860971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:43.356 #46 NEW cov: 12455 ft: 15313 corp: 35/820b lim: 40 exec/s: 46 rss: 74Mb L: 19/40 MS: 1 CMP- DE: "\012\000"- 00:14:43.356 [2024-11-05 16:38:47.900997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:5f5f5f9c cdw11:cb0e8d33 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.356 [2024-11-05 16:38:47.901024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:43.356 [2024-11-05 16:38:47.901099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:9f3a005f cdw11:5f5f5f26 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:43.356 [2024-11-05 16:38:47.901114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:43.613 #47 NEW cov: 12455 ft: 15314 corp: 36/836b lim: 40 exec/s: 23 rss: 74Mb L: 16/40 MS: 1 ChangeBit- 00:14:43.613 #47 DONE cov: 12455 ft: 15314 corp: 36/836b lim: 40 exec/s: 23 rss: 74Mb 00:14:43.613 ###### Recommended dictionary. ###### 00:14:43.613 "\001\010" # Uses: 1 00:14:43.613 "\377\006" # Uses: 0 00:14:43.613 "\234\313\036\2153\237:\000" # Uses: 1 00:14:43.613 "\012\000" # Uses: 0 00:14:43.613 ###### End of recommended dictionary. 
###### 00:14:43.613 Done 47 runs in 2 second(s) 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:14:43.613 16:38:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:14:43.614 [2024-11-05 16:38:48.121487] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
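The xtrace above is the whole per-fuzzer setup performed by start_llvm_fuzz before launching instance 11. A condensed sketch of the same steps follows; the individual commands and variable names (fuzzer_type, timen, core, corpus_dir, nvmf_cfg, suppress_file) come straight from the trace, while $rootdir, the "44" port prefix, and the output redirections are inferred rather than shown:

    fuzzer_type=11                     # traced as: start_llvm_fuzz 11 1 0x1
    timen=1                            # run time in seconds, passed as -t
    core=0x1                           # reactor core mask, passed as -m
    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # from the traced paths
    corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
    nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
    suppress_file=/var/tmp/suppress_nvmf_fuzz

    # The trace shows "printf %02d 11" followed by "port=4411"; the 44
    # prefix (one TCP port per fuzzer instance) is inferred from that pair.
    port=44$(printf %02d "$fuzzer_type")
    mkdir -p "$corpus_dir"
    trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"

    # Rewrite the shared JSON config so this instance listens on its own
    # port. The redirect into $nvmf_cfg is assumed; the trace only shows
    # the sed command itself.
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

    # Two allocations the harness deliberately ignores, so LeakSanitizer
    # does not fail the run (echo targets assumed to append to the
    # suppression file referenced by LSAN_OPTIONS).
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress_file"
    echo leak:nvmf_ctrlr_create >> "$suppress_file"

    LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
        "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
        -m "$core" -s 512 -P "$rootdir/../output/llvm/" -F "$trid" \
        -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" -Z "$fuzzer_type"

As the invocation shows, -D points at the persistent per-fuzzer corpus directory and -Z selects the fuzzer entry point, which is why the startup banner that follows reports the llvm_nvmf_11 corpus and port 4411.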
00:14:43.614 [2024-11-05 16:38:48.121559] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3524441 ] 00:14:43.871 [2024-11-05 16:38:48.389860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.871 [2024-11-05 16:38:48.437870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.129 [2024-11-05 16:38:48.501948] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.129 [2024-11-05 16:38:48.518186] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:14:44.129 INFO: Running with entropic power schedule (0xFF, 100). 00:14:44.129 INFO: Seed: 1746500777 00:14:44.129 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:14:44.129 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:14:44.129 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:14:44.129 INFO: A corpus is not provided, starting from an empty corpus 00:14:44.129 #2 INITED exec/s: 0 rss: 66Mb 00:14:44.129 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:14:44.130 This may also happen if the target rejected all inputs we tried so far 00:14:44.130 [2024-11-05 16:38:48.567732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.130 [2024-11-05 16:38:48.567762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:44.130 [2024-11-05 16:38:48.567836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.130 [2024-11-05 16:38:48.567851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:44.387 NEW_FUNC[1/716]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:14:44.387 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:14:44.387 #10 NEW cov: 12240 ft: 12237 corp: 2/21b lim: 40 exec/s: 0 rss: 73Mb L: 20/20 MS: 3 CrossOver-EraseBytes-InsertRepeatedBytes- 00:14:44.387 [2024-11-05 16:38:48.888575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.387 [2024-11-05 16:38:48.888616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:44.387 [2024-11-05 16:38:48.888676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e6e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.387 [2024-11-05 16:38:48.888691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:44.387 #11 NEW cov: 12353 ft: 12557 corp: 3/41b lim: 40 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 ChangeBit- 00:14:44.387 [2024-11-05 
16:38:48.948480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:f5f5f5f5 cdw11:f5f5f5f5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.388 [2024-11-05 16:38:48.948506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:44.646 #14 NEW cov: 12359 ft: 13607 corp: 4/53b lim: 40 exec/s: 0 rss: 73Mb L: 12/20 MS: 3 CopyPart-EraseBytes-InsertRepeatedBytes- 00:14:44.646 [2024-11-05 16:38:48.988684] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e21e1d1d cdw11:1de2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.646 [2024-11-05 16:38:48.988710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:44.646 [2024-11-05 16:38:48.988776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e6e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.646 [2024-11-05 16:38:48.988790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:44.646 #15 NEW cov: 12444 ft: 13894 corp: 5/73b lim: 40 exec/s: 0 rss: 73Mb L: 20/20 MS: 1 ChangeBinInt- 00:14:44.646 [2024-11-05 16:38:49.048898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e21e1d1d cdw11:1de2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.646 [2024-11-05 16:38:49.048923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:44.646 [2024-11-05 16:38:49.048984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e6e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.646 [2024-11-05 16:38:49.048998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:44.646 #16 NEW cov: 12444 ft: 14004 corp: 6/94b lim: 40 exec/s: 0 rss: 73Mb L: 21/21 MS: 1 InsertByte- 00:14:44.646 [2024-11-05 16:38:49.109061] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:c2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.647 [2024-11-05 16:38:49.109086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:44.647 [2024-11-05 16:38:49.109164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e6e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.647 [2024-11-05 16:38:49.109179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:44.647 #17 NEW cov: 12444 ft: 14214 corp: 7/114b lim: 40 exec/s: 0 rss: 73Mb L: 20/21 MS: 1 ChangeBit- 00:14:44.647 [2024-11-05 16:38:49.149042] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.647 [2024-11-05 16:38:49.149067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:44.647 #18 NEW cov: 12444 ft: 14283 corp: 8/127b lim: 40 exec/s: 0 rss: 73Mb L: 13/21 MS: 1 CrossOver- 00:14:44.647 [2024-11-05 
16:38:49.189308] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e232 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.647 [2024-11-05 16:38:49.189336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:44.647 [2024-11-05 16:38:49.189399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.647 [2024-11-05 16:38:49.189413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:44.647 #19 NEW cov: 12444 ft: 14341 corp: 9/148b lim: 40 exec/s: 0 rss: 73Mb L: 21/21 MS: 1 InsertByte- 00:14:44.647 [2024-11-05 16:38:49.229448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.647 [2024-11-05 16:38:49.229474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:44.647 [2024-11-05 16:38:49.229537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e3e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.647 [2024-11-05 16:38:49.229551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:44.906 #30 NEW cov: 12444 ft: 14416 corp: 10/168b lim: 40 exec/s: 0 rss: 73Mb L: 20/21 MS: 1 ChangeBit- 00:14:44.906 [2024-11-05 16:38:49.269545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e232 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.906 [2024-11-05 16:38:49.269572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:44.906 [2024-11-05 16:38:49.269631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.906 [2024-11-05 16:38:49.269645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:44.906 #31 NEW cov: 12444 ft: 14455 corp: 11/189b lim: 40 exec/s: 0 rss: 73Mb L: 21/21 MS: 1 ChangeByte- 00:14:44.906 [2024-11-05 16:38:49.329718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e226 cdw11:c2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.906 [2024-11-05 16:38:49.329762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:44.906 [2024-11-05 16:38:49.329821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e6e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.906 [2024-11-05 16:38:49.329835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:44.906 #37 NEW cov: 12444 ft: 14471 corp: 12/209b lim: 40 exec/s: 0 rss: 73Mb L: 20/21 MS: 1 ChangeByte- 00:14:44.906 [2024-11-05 16:38:49.389909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e233 cdw11:e2e2e2e2 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.906 [2024-11-05 16:38:49.389935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:44.906 [2024-11-05 16:38:49.389992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.906 [2024-11-05 16:38:49.390006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:44.906 #38 NEW cov: 12444 ft: 14509 corp: 13/230b lim: 40 exec/s: 0 rss: 73Mb L: 21/21 MS: 1 ChangeASCIIInt- 00:14:44.906 [2024-11-05 16:38:49.450025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e226 cdw11:c2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.906 [2024-11-05 16:38:49.450052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:44.906 [2024-11-05 16:38:49.450113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e6e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:44.906 [2024-11-05 16:38:49.450128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:45.164 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:14:45.164 #39 NEW cov: 12467 ft: 14554 corp: 14/251b lim: 40 exec/s: 0 rss: 74Mb L: 21/21 MS: 1 InsertByte- 00:14:45.164 [2024-11-05 16:38:49.510226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e232 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.164 [2024-11-05 16:38:49.510252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.164 [2024-11-05 16:38:49.510313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.164 [2024-11-05 16:38:49.510327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:45.164 #40 NEW cov: 12467 ft: 14583 corp: 15/272b lim: 40 exec/s: 0 rss: 74Mb L: 21/21 MS: 1 ChangeBinInt- 00:14:45.164 [2024-11-05 16:38:49.550177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.164 [2024-11-05 16:38:49.550203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.164 #51 NEW cov: 12467 ft: 14586 corp: 16/282b lim: 40 exec/s: 51 rss: 74Mb L: 10/21 MS: 1 EraseBytes- 00:14:45.164 [2024-11-05 16:38:49.590402] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.164 [2024-11-05 16:38:49.590428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.164 [2024-11-05 16:38:49.590507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000010 cdw11:e2e2e2e2 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.164 [2024-11-05 16:38:49.590522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:45.164 #52 NEW cov: 12467 ft: 14603 corp: 17/303b lim: 40 exec/s: 52 rss: 74Mb L: 21/21 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\020"- 00:14:45.164 [2024-11-05 16:38:49.650659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e226 cdw11:c2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.164 [2024-11-05 16:38:49.650685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.164 [2024-11-05 16:38:49.650766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e6e200e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.164 [2024-11-05 16:38:49.650781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:45.164 #53 NEW cov: 12467 ft: 14636 corp: 18/324b lim: 40 exec/s: 53 rss: 74Mb L: 21/21 MS: 1 CrossOver- 00:14:45.164 [2024-11-05 16:38:49.711173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e232 cdw11:32e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.164 [2024-11-05 16:38:49.711198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.164 [2024-11-05 16:38:49.711272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.164 [2024-11-05 16:38:49.711286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:45.164 [2024-11-05 16:38:49.711358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.164 [2024-11-05 16:38:49.711377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:45.164 [2024-11-05 16:38:49.711436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.164 [2024-11-05 16:38:49.711452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:45.423 #54 NEW cov: 12467 ft: 14989 corp: 19/361b lim: 40 exec/s: 54 rss: 74Mb L: 37/37 MS: 1 CopyPart- 00:14:45.423 [2024-11-05 16:38:49.771006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2c2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.423 [2024-11-05 16:38:49.771033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.423 [2024-11-05 16:38:49.771092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e6e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.423 [2024-11-05 16:38:49.771108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:45.423 #55 NEW 
cov: 12467 ft: 15014 corp: 20/381b lim: 40 exec/s: 55 rss: 74Mb L: 20/37 MS: 1 ShuffleBytes- 00:14:45.423 [2024-11-05 16:38:49.811102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e232 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.423 [2024-11-05 16:38:49.811130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.423 [2024-11-05 16:38:49.811208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.423 [2024-11-05 16:38:49.811225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:45.423 #56 NEW cov: 12467 ft: 15031 corp: 21/402b lim: 40 exec/s: 56 rss: 74Mb L: 21/37 MS: 1 ShuffleBytes- 00:14:45.423 [2024-11-05 16:38:49.851200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e232 cdw11:e2e2e216 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.423 [2024-11-05 16:38:49.851225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.423 [2024-11-05 16:38:49.851303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.423 [2024-11-05 16:38:49.851318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:45.423 #57 NEW cov: 12467 ft: 15034 corp: 22/423b lim: 40 exec/s: 57 rss: 74Mb L: 21/37 MS: 1 ChangeBinInt- 00:14:45.423 [2024-11-05 16:38:49.891295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2c2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.423 [2024-11-05 16:38:49.891320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.423 [2024-11-05 16:38:49.891397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e6e262e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.423 [2024-11-05 16:38:49.891411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:45.423 #58 NEW cov: 12467 ft: 15087 corp: 23/443b lim: 40 exec/s: 58 rss: 74Mb L: 20/37 MS: 1 ChangeBit- 00:14:45.423 [2024-11-05 16:38:49.951678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e232 cdw11:e2e20000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.423 [2024-11-05 16:38:49.951707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.423 [2024-11-05 16:38:49.951772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:000000e2 cdw11:16e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.423 [2024-11-05 16:38:49.951787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:45.423 [2024-11-05 16:38:49.951844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:e2e2e2e2 
cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.423 [2024-11-05 16:38:49.951858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:45.423 #59 NEW cov: 12467 ft: 15281 corp: 24/469b lim: 40 exec/s: 59 rss: 74Mb L: 26/37 MS: 1 InsertRepeatedBytes- 00:14:45.682 [2024-11-05 16:38:50.011673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e232 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.682 [2024-11-05 16:38:50.011700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.682 [2024-11-05 16:38:50.011758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.682 [2024-11-05 16:38:50.011774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:45.682 #60 NEW cov: 12467 ft: 15309 corp: 25/489b lim: 40 exec/s: 60 rss: 74Mb L: 20/37 MS: 1 EraseBytes- 00:14:45.682 [2024-11-05 16:38:50.071887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.682 [2024-11-05 16:38:50.071921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.682 [2024-11-05 16:38:50.071982] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00000010 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.682 [2024-11-05 16:38:50.071997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:45.682 #61 NEW cov: 12467 ft: 15315 corp: 26/510b lim: 40 exec/s: 61 rss: 74Mb L: 21/37 MS: 1 ChangeByte- 00:14:45.682 [2024-11-05 16:38:50.131976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.682 [2024-11-05 16:38:50.132004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.682 [2024-11-05 16:38:50.132079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e2e2e2e2 cdw11:24e3e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.682 [2024-11-05 16:38:50.132094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:45.682 #62 NEW cov: 12467 ft: 15341 corp: 27/530b lim: 40 exec/s: 62 rss: 74Mb L: 20/37 MS: 1 ChangeByte- 00:14:45.682 [2024-11-05 16:38:50.192217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e226 cdw11:c2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.682 [2024-11-05 16:38:50.192242] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.682 [2024-11-05 16:38:50.192298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e6e200e2 cdw11:e20015e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.682 [2024-11-05 16:38:50.192313] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:45.682 #63 NEW cov: 12467 ft: 15351 corp: 28/551b lim: 40 exec/s: 63 rss: 74Mb L: 21/37 MS: 1 ChangeBinInt- 00:14:45.682 [2024-11-05 16:38:50.252719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.682 [2024-11-05 16:38:50.252744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.683 [2024-11-05 16:38:50.252822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:04040404 cdw11:04040404 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.683 [2024-11-05 16:38:50.252836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:45.683 [2024-11-05 16:38:50.252891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:04040404 cdw11:04040404 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.683 [2024-11-05 16:38:50.252905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:45.683 [2024-11-05 16:38:50.252962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:04040404 cdw11:04040404 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.683 [2024-11-05 16:38:50.252976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:45.942 #64 NEW cov: 12467 ft: 15364 corp: 29/590b lim: 40 exec/s: 64 rss: 74Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:14:45.942 [2024-11-05 16:38:50.292441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e232 cdw11:e2e20000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.942 [2024-11-05 16:38:50.292467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.942 [2024-11-05 16:38:50.292530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:00e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.942 [2024-11-05 16:38:50.292544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:45.942 #65 NEW cov: 12467 ft: 15391 corp: 30/611b lim: 40 exec/s: 65 rss: 74Mb L: 21/39 MS: 1 EraseBytes- 00:14:45.942 [2024-11-05 16:38:50.352487] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:f5f5f5f5 cdw11:f5f5f5f5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.942 [2024-11-05 16:38:50.352513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.942 #66 NEW cov: 12467 ft: 15427 corp: 31/623b lim: 40 exec/s: 66 rss: 75Mb L: 12/39 MS: 1 ChangeByte- 00:14:45.942 [2024-11-05 16:38:50.412878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.942 [2024-11-05 16:38:50.412904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.942 
[2024-11-05 16:38:50.412965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e227e2e2 cdw11:24e3e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.942 [2024-11-05 16:38:50.412979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:45.942 #67 NEW cov: 12467 ft: 15452 corp: 32/643b lim: 40 exec/s: 67 rss: 75Mb L: 20/39 MS: 1 ChangeByte- 00:14:45.942 [2024-11-05 16:38:50.472858] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.942 [2024-11-05 16:38:50.472883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:45.942 #68 NEW cov: 12467 ft: 15457 corp: 33/656b lim: 40 exec/s: 68 rss: 75Mb L: 13/39 MS: 1 ShuffleBytes- 00:14:45.942 [2024-11-05 16:38:50.512920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:f5f5f5f5 cdw11:f5f5f5f5 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:45.942 [2024-11-05 16:38:50.512946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:46.202 #69 NEW cov: 12467 ft: 15481 corp: 34/670b lim: 40 exec/s: 69 rss: 75Mb L: 14/39 MS: 1 CopyPart- 00:14:46.202 [2024-11-05 16:38:50.553249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:e2e2e226 cdw11:c2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:46.202 [2024-11-05 16:38:50.553275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:46.202 [2024-11-05 16:38:50.553337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:e6e2e2e2 cdw11:e2e2e2e2 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:46.202 [2024-11-05 16:38:50.553352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:46.202 #70 NEW cov: 12467 ft: 15485 corp: 35/691b lim: 40 exec/s: 35 rss: 75Mb L: 21/39 MS: 1 ChangeByte- 00:14:46.202 #70 DONE cov: 12467 ft: 15485 corp: 35/691b lim: 40 exec/s: 35 rss: 75Mb 00:14:46.202 ###### Recommended dictionary. ###### 00:14:46.202 "\000\000\000\000\000\000\000\020" # Uses: 0 00:14:46.202 ###### End of recommended dictionary. 
###### 00:14:46.202 Done 70 runs in 2 second(s) 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:14:46.202 16:38:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:14:46.202 [2024-11-05 16:38:50.754725] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
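Annotation: the shell trace above is the complete per-instance wiring in nvmf/run.sh — derive the port from the fuzzer number, rewrite the shared JSON config for that port, arm the LSAN leak suppressions, and launch llvm_nvme_fuzz against the per-fuzzer corpus. A condensed standalone sketch of that sequence follows; SPDK_DIR/OUT_DIR and the redirect of the two leak: echoes into the suppression file are assumptions, the remaining commands mirror the trace:

    # sketch of the start_llvm_fuzz wiring seen above (fuzzer_type=12, timen=1, core=0x1)
    FUZZER_NUM=12
    SPDK_DIR=/path/to/spdk                       # hypothetical checkout location
    OUT_DIR=$SPDK_DIR/../output                  # hypothetical output root
    port="44$(printf '%02d' "$FUZZER_NUM")"      # 12 -> 4412, 13 -> 4413, as in the trace
    corpus=$SPDK_DIR/../corpus/llvm_nvmf_$FUZZER_NUM
    conf=/tmp/fuzz_json_$FUZZER_NUM.conf
    suppress=/var/tmp/suppress_nvmf_fuzz

    mkdir -p "$corpus"
    # retarget the shared JSON config at this instance's port
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$conf"
    # known leaks are suppressed via LSAN rather than fixed in the harness
    # (assumption: the echoed leak: lines land in the suppression file)
    { echo leak:spdk_nvmf_qpair_disconnect; echo leak:nvmf_ctrlr_create; } > "$suppress"
    export LSAN_OPTIONS="report_objects=1:suppressions=$suppress:print_suppressions=0"

    "$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
        -P "$OUT_DIR/llvm/" \
        -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
        -c "$conf" -t 1 -D "$corpus" -Z "$FUZZER_NUM"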
00:14:46.202 [2024-11-05 16:38:50.754796] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3524754 ] 00:14:46.463 [2024-11-05 16:38:51.025516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.722 [2024-11-05 16:38:51.073652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.722 [2024-11-05 16:38:51.137729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:46.722 [2024-11-05 16:38:51.153967] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:14:46.722 INFO: Running with entropic power schedule (0xFF, 100). 00:14:46.722 INFO: Seed: 89532941 00:14:46.722 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:14:46.722 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:14:46.722 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:14:46.722 INFO: A corpus is not provided, starting from an empty corpus 00:14:46.722 #2 INITED exec/s: 0 rss: 66Mb 00:14:46.722 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:14:46.722 This may also happen if the target rejected all inputs we tried so far 00:14:46.722 [2024-11-05 16:38:51.199723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:46.722 [2024-11-05 16:38:51.199769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:46.980 NEW_FUNC[1/716]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:14:46.980 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:14:46.980 #6 NEW cov: 12238 ft: 12228 corp: 2/12b lim: 40 exec/s: 0 rss: 73Mb L: 11/11 MS: 4 ChangeBit-ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:14:46.980 [2024-11-05 16:38:51.520547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:46.980 [2024-11-05 16:38:51.520585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:46.980 #8 NEW cov: 12351 ft: 12704 corp: 3/24b lim: 40 exec/s: 0 rss: 73Mb L: 12/12 MS: 2 EraseBytes-CopyPart- 00:14:47.239 [2024-11-05 16:38:51.580564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:7a494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.239 [2024-11-05 16:38:51.580591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:47.239 #14 NEW cov: 12357 ft: 13071 corp: 4/36b lim: 40 exec/s: 0 rss: 73Mb L: 12/12 MS: 1 ChangeByte- 00:14:47.239 [2024-11-05 16:38:51.640892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.239 [2024-11-05 16:38:51.640917] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:47.239 [2024-11-05 16:38:51.640993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.239 [2024-11-05 16:38:51.641008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:47.239 #15 NEW cov: 12442 ft: 14078 corp: 5/59b lim: 40 exec/s: 0 rss: 73Mb L: 23/23 MS: 1 CopyPart- 00:14:47.239 [2024-11-05 16:38:51.681004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.239 [2024-11-05 16:38:51.681029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:47.239 [2024-11-05 16:38:51.681104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:e0e0e0e0 cdw11:e0e0e0e0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.239 [2024-11-05 16:38:51.681125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:47.239 #20 NEW cov: 12442 ft: 14135 corp: 6/77b lim: 40 exec/s: 0 rss: 73Mb L: 18/23 MS: 5 ShuffleBytes-CopyPart-ChangeBinInt-CrossOver-InsertRepeatedBytes- 00:14:47.239 [2024-11-05 16:38:51.721313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.239 [2024-11-05 16:38:51.721339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:47.239 [2024-11-05 16:38:51.721396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:49efefef cdw11:efefefef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.239 [2024-11-05 16:38:51.721410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:47.239 [2024-11-05 16:38:51.721466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef4902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.239 [2024-11-05 16:38:51.721479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:47.239 #21 NEW cov: 12442 ft: 14439 corp: 7/101b lim: 40 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 InsertRepeatedBytes- 00:14:47.239 [2024-11-05 16:38:51.761067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.239 [2024-11-05 16:38:51.761092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:47.239 #22 NEW cov: 12442 ft: 14559 corp: 8/111b lim: 40 exec/s: 0 rss: 73Mb L: 10/24 MS: 1 EraseBytes- 00:14:47.239 [2024-11-05 16:38:51.801195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49b7b6b6 cdw11:bd494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.239 [2024-11-05 16:38:51.801220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:47.239 #23 NEW cov: 12442 ft: 14589 corp: 9/122b lim: 40 exec/s: 0 rss: 73Mb L: 11/24 MS: 1 ChangeBinInt- 00:14:47.499 [2024-11-05 16:38:51.841495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.499 [2024-11-05 16:38:51.841520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:47.499 [2024-11-05 16:38:51.841579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:49efefef cdw11:efefefef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.499 [2024-11-05 16:38:51.841593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:47.499 #24 NEW cov: 12442 ft: 14622 corp: 10/145b lim: 40 exec/s: 0 rss: 73Mb L: 23/24 MS: 1 EraseBytes- 00:14:47.499 [2024-11-05 16:38:51.901660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49495249 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.499 [2024-11-05 16:38:51.901686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:47.499 [2024-11-05 16:38:51.901744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:49efefef cdw11:efefefef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.499 [2024-11-05 16:38:51.901759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:47.499 #25 NEW cov: 12442 ft: 14764 corp: 11/168b lim: 40 exec/s: 0 rss: 73Mb L: 23/24 MS: 1 ChangeByte- 00:14:47.499 [2024-11-05 16:38:51.962002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.499 [2024-11-05 16:38:51.962033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:47.499 [2024-11-05 16:38:51.962090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:49efefef cdw11:efffefef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.499 [2024-11-05 16:38:51.962104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:47.499 [2024-11-05 16:38:51.962158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efef4902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.499 [2024-11-05 16:38:51.962172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:47.499 #26 NEW cov: 12442 ft: 14833 corp: 12/192b lim: 40 exec/s: 0 rss: 73Mb L: 24/24 MS: 1 ChangeBit- 00:14:47.499 [2024-11-05 16:38:52.001800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:b6494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.499 [2024-11-05 16:38:52.001827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:47.499 #27 NEW cov: 12442 ft: 14844 corp: 13/203b lim: 40 exec/s: 0 rss: 73Mb L: 11/24 MS: 1 
ChangeBinInt- 00:14:47.499 [2024-11-05 16:38:52.041904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2c494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.499 [2024-11-05 16:38:52.041930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:47.499 #28 NEW cov: 12442 ft: 14857 corp: 14/215b lim: 40 exec/s: 0 rss: 73Mb L: 12/24 MS: 1 ChangeByte- 00:14:47.499 [2024-11-05 16:38:52.082024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2c494949 cdw11:49492c49 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.499 [2024-11-05 16:38:52.082051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:47.758 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:14:47.758 #29 NEW cov: 12465 ft: 14935 corp: 15/228b lim: 40 exec/s: 0 rss: 73Mb L: 13/24 MS: 1 CrossOver- 00:14:47.758 [2024-11-05 16:38:52.142586] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:4949493b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.758 [2024-11-05 16:38:52.142611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:47.758 [2024-11-05 16:38:52.142672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:4949efef cdw11:efefffef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.758 [2024-11-05 16:38:52.142685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:47.758 [2024-11-05 16:38:52.142743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efefef49 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.758 [2024-11-05 16:38:52.142757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:47.758 #30 NEW cov: 12465 ft: 14983 corp: 16/253b lim: 40 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 InsertByte- 00:14:47.758 [2024-11-05 16:38:52.202723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49495249 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.758 [2024-11-05 16:38:52.202751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:47.758 [2024-11-05 16:38:52.202824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:49efefef cdw11:efefefef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.758 [2024-11-05 16:38:52.202843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:47.758 [2024-11-05 16:38:52.202908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ef000000 cdw11:00efefef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.758 [2024-11-05 16:38:52.202923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:47.758 #31 NEW cov: 12465 ft: 14992 corp: 17/280b lim: 40 exec/s: 31 rss: 74Mb L: 27/27 MS: 1 
InsertRepeatedBytes- 00:14:47.758 [2024-11-05 16:38:52.262728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:494949ef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.758 [2024-11-05 16:38:52.262755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:47.758 [2024-11-05 16:38:52.262826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:efefefff cdw11:efefefef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.758 [2024-11-05 16:38:52.262841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:47.758 #32 NEW cov: 12465 ft: 15009 corp: 18/302b lim: 40 exec/s: 32 rss: 74Mb L: 22/27 MS: 1 EraseBytes- 00:14:47.758 [2024-11-05 16:38:52.302681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:000a4949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:47.758 [2024-11-05 16:38:52.302707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:48.016 #38 NEW cov: 12465 ft: 15012 corp: 19/312b lim: 40 exec/s: 38 rss: 74Mb L: 10/27 MS: 1 ChangeBinInt- 00:14:48.016 [2024-11-05 16:38:52.363178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.016 [2024-11-05 16:38:52.363204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:48.016 [2024-11-05 16:38:52.363278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.016 [2024-11-05 16:38:52.363293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:48.016 [2024-11-05 16:38:52.363360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.016 [2024-11-05 16:38:52.363379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:48.016 #39 NEW cov: 12465 ft: 15031 corp: 20/339b lim: 40 exec/s: 39 rss: 74Mb L: 27/27 MS: 1 CMP- DE: "\377\377\377\377"- 00:14:48.016 [2024-11-05 16:38:52.422977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:494949ff cdw11:ffffff49 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.016 [2024-11-05 16:38:52.423004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:48.016 #40 NEW cov: 12465 ft: 15032 corp: 21/350b lim: 40 exec/s: 40 rss: 74Mb L: 11/27 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:14:48.016 [2024-11-05 16:38:52.483173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49b64949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.016 [2024-11-05 16:38:52.483199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:48.016 #41 NEW cov: 12465 ft: 15039 corp: 22/361b lim: 40 
exec/s: 41 rss: 74Mb L: 11/27 MS: 1 ShuffleBytes- 00:14:48.016 [2024-11-05 16:38:52.523268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:49ffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.016 [2024-11-05 16:38:52.523297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:48.016 #43 NEW cov: 12465 ft: 15059 corp: 23/372b lim: 40 exec/s: 43 rss: 74Mb L: 11/27 MS: 2 EraseBytes-PersAutoDict- DE: "\377\377\377\377"- 00:14:48.016 [2024-11-05 16:38:52.563401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:494949ff cdw11:f5ffff49 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.016 [2024-11-05 16:38:52.563426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:48.275 #44 NEW cov: 12465 ft: 15099 corp: 24/383b lim: 40 exec/s: 44 rss: 74Mb L: 11/27 MS: 1 ChangeBinInt- 00:14:48.275 [2024-11-05 16:38:52.624122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:494949ef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.275 [2024-11-05 16:38:52.624147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:48.275 [2024-11-05 16:38:52.624222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:efefefff cdw11:efefefef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.275 [2024-11-05 16:38:52.624236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:48.275 [2024-11-05 16:38:52.624302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ef494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.275 [2024-11-05 16:38:52.624321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:48.275 [2024-11-05 16:38:52.624382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.275 [2024-11-05 16:38:52.624398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:48.275 #45 NEW cov: 12465 ft: 15415 corp: 25/422b lim: 40 exec/s: 45 rss: 74Mb L: 39/39 MS: 1 CrossOver- 00:14:48.275 [2024-11-05 16:38:52.684101] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:45494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.275 [2024-11-05 16:38:52.684127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:48.275 [2024-11-05 16:38:52.684184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:49efefef cdw11:efefefef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.275 [2024-11-05 16:38:52.684199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:48.275 [2024-11-05 16:38:52.684256] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 
cdw10:efefefef cdw11:efef4902 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.275 [2024-11-05 16:38:52.684270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:48.275 #46 NEW cov: 12465 ft: 15427 corp: 26/446b lim: 40 exec/s: 46 rss: 74Mb L: 24/39 MS: 1 ChangeBinInt- 00:14:48.275 [2024-11-05 16:38:52.724263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.275 [2024-11-05 16:38:52.724288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:48.275 [2024-11-05 16:38:52.724362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.275 [2024-11-05 16:38:52.724377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:48.275 [2024-11-05 16:38:52.724437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff49ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.275 [2024-11-05 16:38:52.724450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:48.275 #47 NEW cov: 12465 ft: 15503 corp: 27/474b lim: 40 exec/s: 47 rss: 74Mb L: 28/39 MS: 1 InsertRepeatedBytes- 00:14:48.275 [2024-11-05 16:38:52.784213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:494949ef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.275 [2024-11-05 16:38:52.784237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:48.276 [2024-11-05 16:38:52.784311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:efefefff cdw11:efef32ef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.276 [2024-11-05 16:38:52.784326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:48.276 #48 NEW cov: 12465 ft: 15509 corp: 28/497b lim: 40 exec/s: 48 rss: 74Mb L: 23/39 MS: 1 InsertByte- 00:14:48.276 [2024-11-05 16:38:52.824510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:000a4949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.276 [2024-11-05 16:38:52.824535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:48.276 [2024-11-05 16:38:52.824611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:52494949 cdw11:494949ef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.276 [2024-11-05 16:38:52.824626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:48.276 [2024-11-05 16:38:52.824683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:efefefef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.276 [2024-11-05 16:38:52.824697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
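Annotation: the status records interleaved with the NVMe notices follow libFuzzer's standard shape — "#<run> NEW cov: <edges> ft: <features> corp: <entries>/<bytes> lim: <len> exec/s: <rate> rss: <mem>Mb L: <n>/<max> MS: <k> <mutations>". A hypothetical post-processing snippet, not part of the SPDK scripts, that pulls the coverage high-water mark out of a saved console log:

    # assumed helper, not in the repo: report final cov/ft from a saved log file
    grep -oE '#[0-9]+ (NEW|DONE) cov: [0-9]+ ft: [0-9]+' console.log |
    awk '{
        # fields: $1="#<run>", $3="cov:", $4=<edges>, $5="ft:", $6=<features>
        if ($4 > cov) cov = $4
        if ($6 > ft)  ft  = $6
    } END { printf "high-water cov: %d ft: %d across %d events\n", cov, ft, NR }'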
00:14:48.534 #50 NEW cov: 12465 ft: 15516 corp: 29/526b lim: 40 exec/s: 50 rss: 74Mb L: 29/39 MS: 2 EraseBytes-CrossOver- 00:14:48.534 [2024-11-05 16:38:52.884521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:49fff5ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.534 [2024-11-05 16:38:52.884547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:48.534 [2024-11-05 16:38:52.884606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:ff494949 cdw11:49000a49 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.534 [2024-11-05 16:38:52.884620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:48.534 #51 NEW cov: 12465 ft: 15536 corp: 30/547b lim: 40 exec/s: 51 rss: 74Mb L: 21/39 MS: 1 CrossOver- 00:14:48.534 [2024-11-05 16:38:52.924415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:000a4949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.534 [2024-11-05 16:38:52.924441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:48.534 #52 NEW cov: 12465 ft: 15557 corp: 31/561b lim: 40 exec/s: 52 rss: 74Mb L: 14/39 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:14:48.534 [2024-11-05 16:38:52.965146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49495249 cdw11:49496060 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.534 [2024-11-05 16:38:52.965172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:48.534 [2024-11-05 16:38:52.965247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:60606060 cdw11:60606060 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.534 [2024-11-05 16:38:52.965262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:48.534 [2024-11-05 16:38:52.965319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:60604949 cdw11:49efefef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.534 [2024-11-05 16:38:52.965333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:48.534 [2024-11-05 16:38:52.965389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:efefefef cdw11:ef000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.534 [2024-11-05 16:38:52.965403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:48.534 #53 NEW cov: 12465 ft: 15581 corp: 32/600b lim: 40 exec/s: 53 rss: 74Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:14:48.534 [2024-11-05 16:38:53.025122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:45494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.534 [2024-11-05 16:38:53.025147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:48.534 [2024-11-05 16:38:53.025221] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:49efefef cdw11:efefefef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.534 [2024-11-05 16:38:53.025236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:48.534 [2024-11-05 16:38:53.025292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:efefefef cdw11:ef99ef49 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.534 [2024-11-05 16:38:53.025306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:48.534 #54 NEW cov: 12465 ft: 15615 corp: 33/625b lim: 40 exec/s: 54 rss: 74Mb L: 25/39 MS: 1 InsertByte- 00:14:48.534 [2024-11-05 16:38:53.085090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:494949ef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.534 [2024-11-05 16:38:53.085116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:48.534 [2024-11-05 16:38:53.085191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:efefefff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.534 [2024-11-05 16:38:53.085206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:48.534 #55 NEW cov: 12465 ft: 15626 corp: 34/647b lim: 40 exec/s: 55 rss: 74Mb L: 22/39 MS: 1 ChangeBinInt- 00:14:48.792 [2024-11-05 16:38:53.125389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:49494949 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.792 [2024-11-05 16:38:53.125415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:48.792 [2024-11-05 16:38:53.125474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:49efefef cdw11:efffefef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.792 [2024-11-05 16:38:53.125488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:48.792 [2024-11-05 16:38:53.125547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:efffffff cdw11:ffefefef SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.793 [2024-11-05 16:38:53.125561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:48.793 #56 NEW cov: 12465 ft: 15663 corp: 35/675b lim: 40 exec/s: 56 rss: 74Mb L: 28/39 MS: 1 PersAutoDict- DE: "\377\377\377\377"- 00:14:48.793 [2024-11-05 16:38:53.165318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:49494949 cdw11:49494900 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.793 [2024-11-05 16:38:53.165343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:48.793 [2024-11-05 16:38:53.165417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:48.793 [2024-11-05 16:38:53.165431] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:48.793 #57 NEW cov: 12465 ft: 15733 corp: 36/694b lim: 40 exec/s: 28 rss: 74Mb L: 19/39 MS: 1 InsertRepeatedBytes- 00:14:48.793 #57 DONE cov: 12465 ft: 15733 corp: 36/694b lim: 40 exec/s: 28 rss: 74Mb 00:14:48.793 ###### Recommended dictionary. ###### 00:14:48.793 "\377\377\377\377" # Uses: 4 00:14:48.793 ###### End of recommended dictionary. ###### 00:14:48.793 Done 57 runs in 2 second(s) 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:14:48.793 16:38:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:14:48.793 [2024-11-05 16:38:53.367894] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
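Annotation: each instance brings up its own in-process NVMe/TCP listener (this one on trsvcid 4413, per the nvmf_tcp_listen notice just below). A hedged sketch of a readiness probe one could add before fuzzing starts — this check does not exist in run.sh and relies only on bash's /dev/tcp redirection:

    # hypothetical readiness check, not present in run.sh
    port=4413
    for _ in $(seq 1 50); do
        if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
            echo "target listening on 127.0.0.1:$port"
            break
        fi
        sleep 0.1
    done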
00:14:48.793 [2024-11-05 16:38:53.367967] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3525073 ] 00:14:49.361 [2024-11-05 16:38:53.645108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.362 [2024-11-05 16:38:53.693226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.362 [2024-11-05 16:38:53.757231] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.362 [2024-11-05 16:38:53.773475] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:14:49.362 INFO: Running with entropic power schedule (0xFF, 100). 00:14:49.362 INFO: Seed: 2708546751 00:14:49.362 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:14:49.362 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:14:49.362 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:14:49.362 INFO: A corpus is not provided, starting from an empty corpus 00:14:49.362 #2 INITED exec/s: 0 rss: 66Mb 00:14:49.362 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:14:49.362 This may also happen if the target rejected all inputs we tried so far 00:14:49.362 [2024-11-05 16:38:53.839738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.362 [2024-11-05 16:38:53.839778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:49.362 [2024-11-05 16:38:53.839849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.362 [2024-11-05 16:38:53.839869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:49.362 [2024-11-05 16:38:53.839940] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.362 [2024-11-05 16:38:53.839959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:49.362 [2024-11-05 16:38:53.840028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.362 [2024-11-05 16:38:53.840047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:49.620 NEW_FUNC[1/715]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:14:49.620 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:14:49.620 #3 NEW cov: 12207 ft: 12207 corp: 2/33b lim: 40 exec/s: 0 rss: 73Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:14:49.620 [2024-11-05 16:38:54.190885] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) 
qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.620 [2024-11-05 16:38:54.190948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:49.620 [2024-11-05 16:38:54.191036] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5bb3b3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.620 [2024-11-05 16:38:54.191064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:49.620 [2024-11-05 16:38:54.191150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:b35b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.620 [2024-11-05 16:38:54.191176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:49.620 [2024-11-05 16:38:54.191260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.620 [2024-11-05 16:38:54.191285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:49.879 #4 NEW cov: 12338 ft: 12707 corp: 3/68b lim: 40 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:14:49.879 [2024-11-05 16:38:54.270725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.879 [2024-11-05 16:38:54.270762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:49.879 [2024-11-05 16:38:54.270834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5bb3b3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.879 [2024-11-05 16:38:54.270854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:49.879 [2024-11-05 16:38:54.270924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:b35b5b5b cdw11:5b5b9a5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.879 [2024-11-05 16:38:54.270942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:49.879 [2024-11-05 16:38:54.271011] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.879 [2024-11-05 16:38:54.271030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:49.879 #5 NEW cov: 12344 ft: 13000 corp: 4/104b lim: 40 exec/s: 0 rss: 74Mb L: 36/36 MS: 1 InsertByte- 00:14:49.879 [2024-11-05 16:38:54.350916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.879 [2024-11-05 16:38:54.350955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:49.879 [2024-11-05 16:38:54.351025] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.879 [2024-11-05 16:38:54.351047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:49.879 [2024-11-05 16:38:54.351116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.879 [2024-11-05 16:38:54.351136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:49.879 [2024-11-05 16:38:54.351205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.879 [2024-11-05 16:38:54.351224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:49.879 #6 NEW cov: 12429 ft: 13306 corp: 5/136b lim: 40 exec/s: 0 rss: 74Mb L: 32/36 MS: 1 CopyPart- 00:14:49.879 [2024-11-05 16:38:54.400905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.879 [2024-11-05 16:38:54.400939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:49.879 [2024-11-05 16:38:54.401012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.879 [2024-11-05 16:38:54.401032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:49.879 [2024-11-05 16:38:54.401100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.879 [2024-11-05 16:38:54.401120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:49.879 #8 NEW cov: 12429 ft: 13963 corp: 6/167b lim: 40 exec/s: 0 rss: 74Mb L: 31/36 MS: 2 ChangeByte-InsertRepeatedBytes- 00:14:49.879 [2024-11-05 16:38:54.451161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.879 [2024-11-05 16:38:54.451195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:49.879 [2024-11-05 16:38:54.451267] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b25 cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.879 [2024-11-05 16:38:54.451287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:49.879 [2024-11-05 16:38:54.451354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.879 [2024-11-05 16:38:54.451373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:14:49.879 [2024-11-05 16:38:54.451441] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.879 [2024-11-05 16:38:54.451461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:50.138 #9 NEW cov: 12429 ft: 14022 corp: 7/199b lim: 40 exec/s: 0 rss: 74Mb L: 32/36 MS: 1 ChangeByte- 00:14:50.138 [2024-11-05 16:38:54.501337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.138 [2024-11-05 16:38:54.501372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:50.138 [2024-11-05 16:38:54.501443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.138 [2024-11-05 16:38:54.501462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:50.138 [2024-11-05 16:38:54.501531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:5b5bb95b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.138 [2024-11-05 16:38:54.501550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:50.138 [2024-11-05 16:38:54.501620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.138 [2024-11-05 16:38:54.501638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:50.138 #15 NEW cov: 12429 ft: 14109 corp: 8/231b lim: 40 exec/s: 0 rss: 74Mb L: 32/36 MS: 1 ChangeByte- 00:14:50.138 [2024-11-05 16:38:54.551442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.138 [2024-11-05 16:38:54.551476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:50.138 [2024-11-05 16:38:54.551547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5bb3b3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.138 [2024-11-05 16:38:54.551567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:50.138 [2024-11-05 16:38:54.551638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:b35b5b5b cdw11:b35b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.138 [2024-11-05 16:38:54.551657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:50.138 [2024-11-05 16:38:54.551738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.138 [2024-11-05 16:38:54.551758] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:50.138 #16 NEW cov: 12429 ft: 14125 corp: 9/270b lim: 40 exec/s: 0 rss: 74Mb L: 39/39 MS: 1 CopyPart- 00:14:50.138 [2024-11-05 16:38:54.601624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.138 [2024-11-05 16:38:54.601658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:50.138 [2024-11-05 16:38:54.601736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5bb3b3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.138 [2024-11-05 16:38:54.601756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:50.138 [2024-11-05 16:38:54.601827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:b35b5b5b cdw11:b35b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.138 [2024-11-05 16:38:54.601846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:50.138 [2024-11-05 16:38:54.601916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.138 [2024-11-05 16:38:54.601935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:50.138 #17 NEW cov: 12429 ft: 14167 corp: 10/309b lim: 40 exec/s: 0 rss: 74Mb L: 39/39 MS: 1 ShuffleBytes- 00:14:50.138 [2024-11-05 16:38:54.681486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.138 [2024-11-05 16:38:54.681521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:50.138 [2024-11-05 16:38:54.681594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b0a5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.138 [2024-11-05 16:38:54.681615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:50.138 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:14:50.138 #18 NEW cov: 12452 ft: 14477 corp: 11/326b lim: 40 exec/s: 0 rss: 74Mb L: 17/39 MS: 1 CrossOver- 00:14:50.397 [2024-11-05 16:38:54.741972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.397 [2024-11-05 16:38:54.742007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:50.397 [2024-11-05 16:38:54.742079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5bb3b3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.397 [2024-11-05 16:38:54.742099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:50.397 [2024-11-05 16:38:54.742167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:b35b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.397 [2024-11-05 16:38:54.742186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:50.397 [2024-11-05 16:38:54.742257] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.397 [2024-11-05 16:38:54.742281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:50.397 #19 NEW cov: 12452 ft: 14490 corp: 12/359b lim: 40 exec/s: 0 rss: 74Mb L: 33/39 MS: 1 EraseBytes- 00:14:50.397 [2024-11-05 16:38:54.792129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.397 [2024-11-05 16:38:54.792163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:50.397 [2024-11-05 16:38:54.792235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5bb3b3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.397 [2024-11-05 16:38:54.792255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:50.397 [2024-11-05 16:38:54.792326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:b35b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.397 [2024-11-05 16:38:54.792344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:50.397 [2024-11-05 16:38:54.792413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.397 [2024-11-05 16:38:54.792431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:50.397 #20 NEW cov: 12452 ft: 14508 corp: 13/392b lim: 40 exec/s: 20 rss: 74Mb L: 33/39 MS: 1 ShuffleBytes- 00:14:50.397 [2024-11-05 16:38:54.872051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.397 [2024-11-05 16:38:54.872086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:50.397 [2024-11-05 16:38:54.872158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b0a5b cdw11:5b5b5b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.397 [2024-11-05 16:38:54.872179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:50.397 #21 NEW cov: 12452 ft: 14525 corp: 14/409b lim: 40 exec/s: 21 rss: 74Mb L: 17/39 MS: 1 ChangeBinInt- 00:14:50.397 [2024-11-05 16:38:54.952262] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b 
cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.397 [2024-11-05 16:38:54.952296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:50.397 [2024-11-05 16:38:54.952369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b0a5b cdw11:5b5b0a5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.397 [2024-11-05 16:38:54.952389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:50.656 #22 NEW cov: 12452 ft: 14559 corp: 15/431b lim: 40 exec/s: 22 rss: 74Mb L: 22/39 MS: 1 CopyPart- 00:14:50.656 [2024-11-05 16:38:55.002698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.656 [2024-11-05 16:38:55.002739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:50.656 [2024-11-05 16:38:55.002809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5bb3b3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.656 [2024-11-05 16:38:55.002829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:50.656 [2024-11-05 16:38:55.002908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:b35b5b5b cdw11:b35b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.656 [2024-11-05 16:38:55.002928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:50.656 [2024-11-05 16:38:55.002997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.656 [2024-11-05 16:38:55.003016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:50.656 #23 NEW cov: 12452 ft: 14596 corp: 16/470b lim: 40 exec/s: 23 rss: 74Mb L: 39/39 MS: 1 ChangeBinInt- 00:14:50.656 [2024-11-05 16:38:55.052882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.656 [2024-11-05 16:38:55.052916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:50.656 [2024-11-05 16:38:55.052984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5bb3b3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.656 [2024-11-05 16:38:55.053004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:50.656 [2024-11-05 16:38:55.053075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:b35b545b cdw11:5b5b9a5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.656 [2024-11-05 16:38:55.053094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:50.656 [2024-11-05 16:38:55.053161] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.656 [2024-11-05 16:38:55.053179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:50.656 #24 NEW cov: 12452 ft: 14623 corp: 17/506b lim: 40 exec/s: 24 rss: 74Mb L: 36/39 MS: 1 ChangeByte- 00:14:50.656 [2024-11-05 16:38:55.132917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.656 [2024-11-05 16:38:55.132954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:50.656 [2024-11-05 16:38:55.133024] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.656 [2024-11-05 16:38:55.133044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:50.656 [2024-11-05 16:38:55.133113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.656 [2024-11-05 16:38:55.133133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:50.656 #25 NEW cov: 12452 ft: 14684 corp: 18/537b lim: 40 exec/s: 25 rss: 74Mb L: 31/39 MS: 1 ChangeBit- 00:14:50.656 [2024-11-05 16:38:55.213296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.656 [2024-11-05 16:38:55.213329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:50.656 [2024-11-05 16:38:55.213400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5bb3b3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.656 [2024-11-05 16:38:55.213424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:50.656 [2024-11-05 16:38:55.213494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:b35b5b5b cdw11:5b230000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.656 [2024-11-05 16:38:55.213513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:50.656 [2024-11-05 16:38:55.213581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:005b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.656 [2024-11-05 16:38:55.213600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:50.914 #26 NEW cov: 12452 ft: 14706 corp: 19/572b lim: 40 exec/s: 26 rss: 74Mb L: 35/39 MS: 1 ChangeBinInt- 00:14:50.914 [2024-11-05 16:38:55.263141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.914 [2024-11-05 16:38:55.263175] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:50.914 [2024-11-05 16:38:55.263245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b4a5b cdw11:5b5b5b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.914 [2024-11-05 16:38:55.263264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:50.914 #27 NEW cov: 12452 ft: 14718 corp: 20/589b lim: 40 exec/s: 27 rss: 74Mb L: 17/39 MS: 1 ChangeBit- 00:14:50.914 [2024-11-05 16:38:55.343673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b0400 cdw11:00005b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.914 [2024-11-05 16:38:55.343708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:50.914 [2024-11-05 16:38:55.343788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b25 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.914 [2024-11-05 16:38:55.343808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:50.914 [2024-11-05 16:38:55.343875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.914 [2024-11-05 16:38:55.343893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:50.914 [2024-11-05 16:38:55.343962] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.914 [2024-11-05 16:38:55.343982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:50.914 #28 NEW cov: 12452 ft: 14733 corp: 21/625b lim: 40 exec/s: 28 rss: 74Mb L: 36/39 MS: 1 CMP- DE: "\004\000\000\000"- 00:14:50.914 [2024-11-05 16:38:55.423608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.914 [2024-11-05 16:38:55.423642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:50.914 [2024-11-05 16:38:55.423720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b0a5b cdw11:5b5bf5a4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:50.914 [2024-11-05 16:38:55.423749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:50.914 #29 NEW cov: 12452 ft: 14740 corp: 22/647b lim: 40 exec/s: 29 rss: 74Mb L: 22/39 MS: 1 ChangeBinInt- 00:14:51.173 [2024-11-05 16:38:55.504127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.173 [2024-11-05 16:38:55.504161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:51.173 [2024-11-05 16:38:55.504231] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5bb3b3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.173 [2024-11-05 16:38:55.504251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:51.173 [2024-11-05 16:38:55.504318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:b35b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.173 [2024-11-05 16:38:55.504337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:51.173 [2024-11-05 16:38:55.504405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.173 [2024-11-05 16:38:55.504425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:51.173 #30 NEW cov: 12452 ft: 14786 corp: 23/682b lim: 40 exec/s: 30 rss: 74Mb L: 35/39 MS: 1 ShuffleBytes- 00:14:51.173 [2024-11-05 16:38:55.553950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.173 [2024-11-05 16:38:55.553983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:51.173 [2024-11-05 16:38:55.554055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b0a5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.173 [2024-11-05 16:38:55.554075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:51.173 #31 NEW cov: 12452 ft: 14826 corp: 24/699b lim: 40 exec/s: 31 rss: 74Mb L: 17/39 MS: 1 CopyPart- 00:14:51.173 [2024-11-05 16:38:55.604092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.173 [2024-11-05 16:38:55.604127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:51.173 [2024-11-05 16:38:55.604210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.173 [2024-11-05 16:38:55.604231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:51.173 #32 NEW cov: 12452 ft: 14828 corp: 25/716b lim: 40 exec/s: 32 rss: 75Mb L: 17/39 MS: 1 CrossOver- 00:14:51.173 [2024-11-05 16:38:55.684616] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5b040f cdw11:00005b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.173 [2024-11-05 16:38:55.684649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:51.173 [2024-11-05 16:38:55.684726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b25 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.173 [2024-11-05 16:38:55.684746] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:51.173 [2024-11-05 16:38:55.684816] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.173 [2024-11-05 16:38:55.684836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:51.173 [2024-11-05 16:38:55.684910] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.173 [2024-11-05 16:38:55.684929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:51.173 #33 NEW cov: 12452 ft: 14840 corp: 26/752b lim: 40 exec/s: 33 rss: 75Mb L: 36/39 MS: 1 CMP- DE: "\017\000"- 00:14:51.431 [2024-11-05 16:38:55.764820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a5bff39 cdw11:9f37d420 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.431 [2024-11-05 16:38:55.764853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:51.431 [2024-11-05 16:38:55.764922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:754e5b5b cdw11:5b5b5b25 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.431 [2024-11-05 16:38:55.764942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:51.431 [2024-11-05 16:38:55.765010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.431 [2024-11-05 16:38:55.765028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:51.431 [2024-11-05 16:38:55.765097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:5b5b5b5b cdw11:5b5b5b5b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:51.431 [2024-11-05 16:38:55.765116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:51.432 #34 NEW cov: 12452 ft: 14857 corp: 27/788b lim: 40 exec/s: 17 rss: 75Mb L: 36/39 MS: 1 CMP- DE: "\3779\2377\324 uN"- 00:14:51.432 #34 DONE cov: 12452 ft: 14857 corp: 27/788b lim: 40 exec/s: 17 rss: 75Mb 00:14:51.432 ###### Recommended dictionary. ###### 00:14:51.432 "\004\000\000\000" # Uses: 0 00:14:51.432 "\017\000" # Uses: 0 00:14:51.432 "\3779\2377\324 uN" # Uses: 0 00:14:51.432 ###### End of recommended dictionary. 
###### 00:14:51.432 Done 34 runs in 2 second(s) 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:14:51.432 16:38:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:14:51.432 [2024-11-05 16:38:56.004884] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
00:14:51.432 [2024-11-05 16:38:56.004967] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3525418 ] 00:14:51.998 [2024-11-05 16:38:56.278291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.998 [2024-11-05 16:38:56.326312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.998 [2024-11-05 16:38:56.390426] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.998 [2024-11-05 16:38:56.406673] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:14:51.998 INFO: Running with entropic power schedule (0xFF, 100). 00:14:51.998 INFO: Seed: 1044569180 00:14:51.998 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:14:51.998 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:14:51.998 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:14:51.998 INFO: A corpus is not provided, starting from an empty corpus 00:14:51.998 #2 INITED exec/s: 0 rss: 66Mb 00:14:51.998 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:14:51.998 This may also happen if the target rejected all inputs we tried so far 00:14:51.998 [2024-11-05 16:38:56.455572] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:51.998 [2024-11-05 16:38:56.455604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:52.256 NEW_FUNC[1/717]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:14:52.256 NEW_FUNC[2/717]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:14:52.256 #6 NEW cov: 12227 ft: 12221 corp: 2/11b lim: 35 exec/s: 0 rss: 73Mb L: 10/10 MS: 4 CrossOver-CrossOver-EraseBytes-CMP- DE: "\3779\237825\377\314"- 00:14:52.256 [2024-11-05 16:38:56.776928] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000044 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:52.256 [2024-11-05 16:38:56.776965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:52.256 [2024-11-05 16:38:56.777027] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:52.256 [2024-11-05 16:38:56.777041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:52.256 [2024-11-05 16:38:56.777101] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:52.256 [2024-11-05 16:38:56.777115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:52.256 #15 NEW cov: 12350 ft: 13505 corp: 3/36b lim: 35 exec/s: 0 rss: 73Mb L: 25/25 MS: 4 CopyPart-ChangeByte-CrossOver-InsertRepeatedBytes- 00:14:52.256 [2024-11-05 16:38:56.826542] 
nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:52.256 [2024-11-05 16:38:56.826572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:52.515 #16 NEW cov: 12356 ft: 13868 corp: 4/46b lim: 35 exec/s: 0 rss: 73Mb L: 10/25 MS: 1 ShuffleBytes- 00:14:52.515 [2024-11-05 16:38:56.886731] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:52.515 [2024-11-05 16:38:56.886760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:52.515 #17 NEW cov: 12441 ft: 14109 corp: 5/56b lim: 35 exec/s: 0 rss: 73Mb L: 10/25 MS: 1 ChangeByte- 00:14:52.515 [2024-11-05 16:38:56.926861] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:52.515 [2024-11-05 16:38:56.926888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:52.515 #18 NEW cov: 12441 ft: 14208 corp: 6/66b lim: 35 exec/s: 0 rss: 73Mb L: 10/25 MS: 1 CrossOver- 00:14:52.515 [2024-11-05 16:38:56.986991] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:52.515 [2024-11-05 16:38:56.987018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:52.515 #19 NEW cov: 12441 ft: 14309 corp: 7/76b lim: 35 exec/s: 0 rss: 73Mb L: 10/25 MS: 1 ShuffleBytes- 00:14:52.515 [2024-11-05 16:38:57.027511] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000044 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:52.515 [2024-11-05 16:38:57.027539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:52.515 [2024-11-05 16:38:57.027602] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:52.515 [2024-11-05 16:38:57.027616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:52.515 [2024-11-05 16:38:57.027682] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:52.515 [2024-11-05 16:38:57.027703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:52.515 #25 NEW cov: 12441 ft: 14372 corp: 8/101b lim: 35 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 ShuffleBytes- 00:14:52.515 [2024-11-05 16:38:57.087492] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:52.515 [2024-11-05 16:38:57.087519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:52.515 [2024-11-05 16:38:57.087585] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:52.515 
[2024-11-05 16:38:57.087599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:52.773 #26 NEW cov: 12441 ft: 14595 corp: 9/119b lim: 35 exec/s: 0 rss: 73Mb L: 18/25 MS: 1 PersAutoDict- DE: "\3779\237825\377\314"- 00:14:52.773 [2024-11-05 16:38:57.127388] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:52.773 [2024-11-05 16:38:57.127415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:52.774 #27 NEW cov: 12441 ft: 14625 corp: 10/129b lim: 35 exec/s: 0 rss: 73Mb L: 10/25 MS: 1 ChangeBinInt- 00:14:52.774 [2024-11-05 16:38:57.187597] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000038 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:52.774 [2024-11-05 16:38:57.187623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:52.774 #29 NEW cov: 12441 ft: 14677 corp: 11/137b lim: 35 exec/s: 0 rss: 73Mb L: 8/25 MS: 2 CrossOver-CrossOver- 00:14:52.774 [2024-11-05 16:38:57.247960] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:52.774 [2024-11-05 16:38:57.247988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:52.774 [2024-11-05 16:38:57.248072] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:52.774 [2024-11-05 16:38:57.248089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:52.774 #30 NEW cov: 12441 ft: 14736 corp: 12/151b lim: 35 exec/s: 0 rss: 74Mb L: 14/25 MS: 1 CrossOver- 00:14:52.774 [2024-11-05 16:38:57.308191] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000b0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:52.774 [2024-11-05 16:38:57.308217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:52.774 NEW_FUNC[1/2]: 0x137bef8 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1766 00:14:52.774 NEW_FUNC[2/2]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:14:52.774 #31 NEW cov: 12487 ft: 14799 corp: 13/171b lim: 35 exec/s: 0 rss: 74Mb L: 20/25 MS: 1 CrossOver- 00:14:52.774 [2024-11-05 16:38:57.358093] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:52.774 [2024-11-05 16:38:57.358120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.032 #36 NEW cov: 12487 ft: 14840 corp: 14/182b lim: 35 exec/s: 0 rss: 74Mb L: 11/25 MS: 5 ChangeByte-CopyPart-ChangeBit-ChangeBit-InsertRepeatedBytes- 00:14:53.032 [2024-11-05 16:38:57.398225] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.032 [2024-11-05 16:38:57.398254] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.032 #37 NEW cov: 12487 ft: 14855 corp: 15/192b lim: 35 exec/s: 0 rss: 74Mb L: 10/25 MS: 1 PersAutoDict- DE: "\3779\237825\377\314"- 00:14:53.032 [2024-11-05 16:38:57.438343] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.032 [2024-11-05 16:38:57.438372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.032 #38 NEW cov: 12487 ft: 14923 corp: 16/200b lim: 35 exec/s: 38 rss: 74Mb L: 8/25 MS: 1 EraseBytes- 00:14:53.032 [2024-11-05 16:38:57.498493] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.032 [2024-11-05 16:38:57.498519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.032 #39 NEW cov: 12487 ft: 15004 corp: 17/210b lim: 35 exec/s: 39 rss: 74Mb L: 10/25 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:14:53.032 [2024-11-05 16:38:57.558732] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.032 [2024-11-05 16:38:57.558759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.032 #40 NEW cov: 12487 ft: 15045 corp: 18/221b lim: 35 exec/s: 40 rss: 74Mb L: 11/25 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:14:53.290 [2024-11-05 16:38:57.618801] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.290 [2024-11-05 16:38:57.618830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.290 #41 NEW cov: 12487 ft: 15118 corp: 19/231b lim: 35 exec/s: 41 rss: 74Mb L: 10/25 MS: 1 ChangeBit- 00:14:53.290 [2024-11-05 16:38:57.658954] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.290 [2024-11-05 16:38:57.658983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.290 #42 NEW cov: 12487 ft: 15143 corp: 20/241b lim: 35 exec/s: 42 rss: 74Mb L: 10/25 MS: 1 ChangeByte- 00:14:53.290 [2024-11-05 16:38:57.719126] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000038 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.290 [2024-11-05 16:38:57.719152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.290 #43 NEW cov: 12487 ft: 15163 corp: 21/250b lim: 35 exec/s: 43 rss: 74Mb L: 9/25 MS: 1 InsertByte- 00:14:53.290 [2024-11-05 16:38:57.779274] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.290 [2024-11-05 16:38:57.779299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.290 #44 NEW cov: 12487 ft: 15182 corp: 22/261b lim: 35 
exec/s: 44 rss: 74Mb L: 11/25 MS: 1 ChangeByte- 00:14:53.290 [2024-11-05 16:38:57.839424] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.290 [2024-11-05 16:38:57.839450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.549 #48 NEW cov: 12487 ft: 15196 corp: 23/268b lim: 35 exec/s: 48 rss: 74Mb L: 7/25 MS: 4 EraseBytes-ChangeByte-ChangeBinInt-CMP- DE: "\377\007"- 00:14:53.549 [2024-11-05 16:38:57.899943] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000044 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.549 [2024-11-05 16:38:57.899968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.549 [2024-11-05 16:38:57.900046] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.549 [2024-11-05 16:38:57.900061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:53.549 [2024-11-05 16:38:57.900120] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.549 [2024-11-05 16:38:57.900135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:53.549 #49 NEW cov: 12487 ft: 15212 corp: 24/293b lim: 35 exec/s: 49 rss: 74Mb L: 25/25 MS: 1 CMP- DE: "\001\000\000\000\000\000\003\377"- 00:14:53.549 [2024-11-05 16:38:57.939730] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.549 [2024-11-05 16:38:57.939754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.549 #50 NEW cov: 12487 ft: 15246 corp: 25/304b lim: 35 exec/s: 50 rss: 74Mb L: 11/25 MS: 1 ChangeByte- 00:14:53.549 [2024-11-05 16:38:57.979910] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.549 [2024-11-05 16:38:57.979937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.549 #51 NEW cov: 12487 ft: 15247 corp: 26/316b lim: 35 exec/s: 51 rss: 74Mb L: 12/25 MS: 1 CMP- DE: "\000\003"- 00:14:53.549 [2024-11-05 16:38:58.020141] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000044 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.549 [2024-11-05 16:38:58.020169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.549 [2024-11-05 16:38:58.020247] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.549 [2024-11-05 16:38:58.020263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:53.549 #52 NEW cov: 12487 ft: 15258 corp: 27/330b lim: 35 exec/s: 52 rss: 74Mb L: 14/25 MS: 1 CrossOver- 00:14:53.549 [2024-11-05 16:38:58.080341] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.549 [2024-11-05 16:38:58.080369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.549 [2024-11-05 16:38:58.080432] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.549 [2024-11-05 16:38:58.080449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:53.549 #53 NEW cov: 12487 ft: 15297 corp: 28/345b lim: 35 exec/s: 53 rss: 74Mb L: 15/25 MS: 1 InsertByte- 00:14:53.549 [2024-11-05 16:38:58.120673] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000044 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.549 [2024-11-05 16:38:58.120698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.549 [2024-11-05 16:38:58.120783] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.549 [2024-11-05 16:38:58.120799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:53.549 [2024-11-05 16:38:58.120872] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.549 [2024-11-05 16:38:58.120888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:53.808 #54 NEW cov: 12487 ft: 15308 corp: 29/370b lim: 35 exec/s: 54 rss: 74Mb L: 25/25 MS: 1 ChangeASCIIInt- 00:14:53.809 [2024-11-05 16:38:58.180665] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.809 [2024-11-05 16:38:58.180690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.809 [2024-11-05 16:38:58.180750] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.809 [2024-11-05 16:38:58.180766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:53.809 #55 NEW cov: 12487 ft: 15314 corp: 30/384b lim: 35 exec/s: 55 rss: 74Mb L: 14/25 MS: 1 ShuffleBytes- 00:14:53.809 [2024-11-05 16:38:58.221138] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000044 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.809 [2024-11-05 16:38:58.221164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.809 [2024-11-05 16:38:58.221242] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.809 [2024-11-05 16:38:58.221257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:53.809 [2024-11-05 16:38:58.221323] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED 
cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.809 [2024-11-05 16:38:58.221337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:53.809 [2024-11-05 16:38:58.221397] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.809 [2024-11-05 16:38:58.221413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:14:53.809 #56 NEW cov: 12487 ft: 15613 corp: 31/413b lim: 35 exec/s: 56 rss: 74Mb L: 29/29 MS: 1 CMP- DE: "\377\377\377\377"- 00:14:53.809 [2024-11-05 16:38:58.261114] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000044 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.809 [2024-11-05 16:38:58.261139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.809 [2024-11-05 16:38:58.261201] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.809 [2024-11-05 16:38:58.261215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:53.809 [2024-11-05 16:38:58.261274] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.809 [2024-11-05 16:38:58.261287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:53.809 #57 NEW cov: 12487 ft: 15623 corp: 32/438b lim: 35 exec/s: 57 rss: 74Mb L: 25/29 MS: 1 ChangeASCIIInt- 00:14:53.809 [2024-11-05 16:38:58.300828] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.809 [2024-11-05 16:38:58.300856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.809 #58 NEW cov: 12487 ft: 15635 corp: 33/448b lim: 35 exec/s: 58 rss: 75Mb L: 10/29 MS: 1 CMP- DE: "\000\000\000\003"- 00:14:53.809 [2024-11-05 16:38:58.361219] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.809 [2024-11-05 16:38:58.361246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:53.809 [2024-11-05 16:38:58.361320] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES LBA RANGE TYPE cid:5 cdw10:00000003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:53.809 [2024-11-05 16:38:58.361335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:54.067 NEW_FUNC[1/1]: 0x46c498 in feat_lba_range_type /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:289 00:14:54.067 #59 NEW cov: 12498 ft: 15654 corp: 34/468b lim: 35 exec/s: 59 rss: 75Mb L: 20/29 MS: 1 InsertRepeatedBytes- 00:14:54.067 [2024-11-05 16:38:58.421545] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000044 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:54.067 [2024-11-05 
16:38:58.421570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:14:54.067 [2024-11-05 16:38:58.421651] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000039 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:54.067 [2024-11-05 16:38:58.421666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:14:54.067 [2024-11-05 16:38:58.421744] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:14:54.067 [2024-11-05 16:38:58.421762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:14:54.067 #60 NEW cov: 12498 ft: 15666 corp: 35/493b lim: 35 exec/s: 30 rss: 75Mb L: 25/29 MS: 1 CMP- DE: "\001:\2379?R\344\374"- 00:14:54.067 #60 DONE cov: 12498 ft: 15666 corp: 35/493b lim: 35 exec/s: 30 rss: 75Mb 00:14:54.067 ###### Recommended dictionary. ###### 00:14:54.067 "\3779\237825\377\314" # Uses: 2 00:14:54.067 "\377\377\377\377\377\377\377\377" # Uses: 1 00:14:54.067 "\377\007" # Uses: 0 00:14:54.067 "\001\000\000\000\000\000\003\377" # Uses: 0 00:14:54.067 "\000\003" # Uses: 0 00:14:54.067 "\377\377\377\377" # Uses: 0 00:14:54.067 "\000\000\000\003" # Uses: 0 00:14:54.067 "\001:\2379?R\344\374" # Uses: 0 00:14:54.067 ###### End of recommended dictionary. ###### 00:14:54.067 Done 60 runs in 2 second(s) 00:14:54.067 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:14:54.067 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:14:54.067 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:14:54.067 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:14:54.067 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:14:54.067 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:14:54.067 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:14:54.067 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:14:54.067 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:14:54.067 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:14:54.067 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:14:54.068 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:14:54.068 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:14:54.068 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:14:54.068 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:14:54.068 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:14:54.068 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo 
leak:spdk_nvmf_qpair_disconnect 00:14:54.068 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:14:54.068 16:38:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:14:54.068 [2024-11-05 16:38:58.622378] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:14:54.068 [2024-11-05 16:38:58.622451] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3525774 ] 00:14:54.326 [2024-11-05 16:38:58.890606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.584 [2024-11-05 16:38:58.938825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.584 [2024-11-05 16:38:59.002959] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.584 [2024-11-05 16:38:59.019209] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:14:54.584 INFO: Running with entropic power schedule (0xFF, 100). 00:14:54.584 INFO: Seed: 3657576825 00:14:54.584 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:14:54.584 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:14:54.584 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:14:54.584 INFO: A corpus is not provided, starting from an empty corpus 00:14:54.584 #2 INITED exec/s: 0 rss: 66Mb 00:14:54.584 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:14:54.584 This may also happen if the target rejected all inputs we tried so far
00:14:54.584 [2024-11-05 16:38:59.092546] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:54.584 [2024-11-05 16:38:59.092598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:14:54.584 [2024-11-05 16:38:59.092723] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:54.584 [2024-11-05 16:38:59.092747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:14:55.150 NEW_FUNC[1/716]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460
00:14:55.150 NEW_FUNC[2/716]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340
00:14:55.150 #15 NEW cov: 12222 ft: 12221 corp: 2/22b lim: 35 exec/s: 0 rss: 73Mb L: 21/21 MS: 3 InsertByte-ChangeByte-InsertRepeatedBytes-
00:14:55.150 [2024-11-05 16:38:59.593738] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.150 [2024-11-05 16:38:59.593790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:14:55.150 [2024-11-05 16:38:59.593893] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.150 [2024-11-05 16:38:59.593914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:14:55.150 [2024-11-05 16:38:59.594018] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.150 [2024-11-05 16:38:59.594039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:14:55.150 #16 NEW cov: 12335 ft: 13359 corp: 3/52b lim: 35 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 InsertRepeatedBytes-
00:14:55.150 [2024-11-05 16:38:59.693875] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.150 [2024-11-05 16:38:59.693913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:14:55.150 [2024-11-05 16:38:59.694026] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.150 [2024-11-05 16:38:59.694047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:14:55.150 [2024-11-05 16:38:59.694154] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.150 [2024-11-05 16:38:59.694175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:14:55.407 #22 NEW cov: 12341 ft: 13573 corp: 4/82b lim: 35 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 ChangeByte-
00:14:55.408 [2024-11-05 16:38:59.794170] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.408 [2024-11-05 16:38:59.794211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:14:55.408 [2024-11-05 16:38:59.794319] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.408 [2024-11-05 16:38:59.794340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:14:55.408 [2024-11-05 16:38:59.794445] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.408 [2024-11-05 16:38:59.794467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:14:55.408 #23 NEW cov: 12426 ft: 13837 corp: 5/110b lim: 35 exec/s: 0 rss: 73Mb L: 28/30 MS: 1 CrossOver-
00:14:55.408 [2024-11-05 16:38:59.894216] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.408 [2024-11-05 16:38:59.894253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:14:55.408 [2024-11-05 16:38:59.894355] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.408 [2024-11-05 16:38:59.894375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:14:55.408 #29 NEW cov: 12426 ft: 13976 corp: 6/131b lim: 35 exec/s: 0 rss: 73Mb L: 21/30 MS: 1 ChangeBinInt-
00:14:55.408 [2024-11-05 16:38:59.964848] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.408 [2024-11-05 16:38:59.964883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:14:55.408 [2024-11-05 16:38:59.964991] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.408 [2024-11-05 16:38:59.965014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:14:55.408 [2024-11-05 16:38:59.965123] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.408 [2024-11-05 16:38:59.965145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:14:55.665 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662
00:14:55.665 #30 NEW cov: 12449 ft: 14059 corp: 7/161b lim: 35 exec/s: 0 rss: 73Mb L: 30/30 MS: 1 ChangeBit-
00:14:55.665 [2024-11-05 16:39:00.035281] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.665 [2024-11-05 16:39:00.035320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:14:55.665 [2024-11-05 16:39:00.035423] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.665 [2024-11-05 16:39:00.035446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:14:55.665 [2024-11-05 16:39:00.035527] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000002ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.665 [2024-11-05 16:39:00.035548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:14:55.665 #31 NEW cov: 12449 ft: 14183 corp: 8/194b lim: 35 exec/s: 31 rss: 73Mb L: 33/33 MS: 1 CrossOver-
00:14:55.665 [2024-11-05 16:39:00.105516] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.665 [2024-11-05 16:39:00.105556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:14:55.665 [2024-11-05 16:39:00.105666] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.665 [2024-11-05 16:39:00.105687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:14:55.665 [2024-11-05 16:39:00.105801] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.665 [2024-11-05 16:39:00.105822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:14:55.665 #32 NEW cov: 12449 ft: 14231 corp: 9/224b lim: 35 exec/s: 32 rss: 73Mb L: 30/33 MS: 1 ChangeBit-
00:14:55.665 [2024-11-05 16:39:00.175843] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.665 [2024-11-05 16:39:00.175881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:14:55.665 [2024-11-05 16:39:00.175994] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.665 [2024-11-05 16:39:00.176015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:14:55.665 [2024-11-05 16:39:00.176127] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.665 [2024-11-05 16:39:00.176151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:14:55.665 #33 NEW cov: 12449 ft: 14277 corp: 10/254b lim: 35 exec/s: 33 rss: 73Mb L: 30/33 MS: 1 ChangeBit-
00:14:55.923 [2024-11-05 16:39:00.276094] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.923 [2024-11-05 16:39:00.276136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:14:55.923 [2024-11-05 16:39:00.276250] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.923 [2024-11-05 16:39:00.276273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:14:55.923 [2024-11-05 16:39:00.276392] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.923 [2024-11-05 16:39:00.276414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:14:55.923 #34 NEW cov: 12449 ft: 14321 corp: 11/284b lim: 35 exec/s: 34 rss: 74Mb L: 30/33 MS: 1 ChangeBinInt-
00:14:55.923 [2024-11-05 16:39:00.376576] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.923 [2024-11-05 16:39:00.376617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:14:55.923 [2024-11-05 16:39:00.376847] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.923 [2024-11-05 16:39:00.376871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:14:55.923 #35 NEW cov: 12449 ft: 14418 corp: 12/315b lim: 35 exec/s: 35 rss: 74Mb L: 31/33 MS: 1 CrossOver-
00:14:55.923 [2024-11-05 16:39:00.476829] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.923 [2024-11-05 16:39:00.476871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:14:55.923 [2024-11-05 16:39:00.477095] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:55.923 [2024-11-05 16:39:00.477118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:14:56.181 #36 NEW cov: 12449 ft: 14421 corp: 13/346b lim: 35 exec/s: 36 rss: 74Mb L: 31/33 MS: 1 CrossOver-
00:14:56.181 [2024-11-05 16:39:00.577266] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:56.181 [2024-11-05 16:39:00.577305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:14:56.181 [2024-11-05 16:39:00.577411] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:56.181 [2024-11-05 16:39:00.577437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:14:56.181 [2024-11-05 16:39:00.577543] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:56.181 [2024-11-05 16:39:00.577566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:14:56.181 #37 NEW cov: 12449 ft: 14453 corp: 14/376b lim: 35 exec/s: 37 rss: 74Mb L: 30/33 MS: 1 ChangeBinInt-
00:14:56.181 [2024-11-05 16:39:00.637225] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:56.181 [2024-11-05 16:39:00.637263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:14:56.181 #38 NEW cov: 12449 ft: 14618 corp: 15/403b lim: 35 exec/s: 38 rss: 74Mb L: 27/33 MS: 1 EraseBytes-
00:14:56.439 #39 NEW cov: 12449 ft: 14876 corp: 16/415b lim: 35 exec/s: 39 rss: 74Mb L: 12/33 MS: 1 CrossOver-
00:14:56.439 [2024-11-05 16:39:00.818180] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:56.439 [2024-11-05 16:39:00.818221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:14:56.439 [2024-11-05 16:39:00.818287] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:56.439 [2024-11-05 16:39:00.818309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:14:56.439 [2024-11-05 16:39:00.818420] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:56.439 [2024-11-05 16:39:00.818440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:14:56.439 #40 NEW cov: 12449 ft: 14878 corp: 17/446b lim: 35 exec/s: 40 rss: 74Mb L: 31/33 MS: 1 InsertByte-
00:14:56.439 [2024-11-05 16:39:00.918475] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:56.440 [2024-11-05 16:39:00.918514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:14:56.440 [2024-11-05 16:39:00.918613] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:56.440 [2024-11-05 16:39:00.918635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:14:56.440 [2024-11-05 16:39:00.918753] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:56.440 [2024-11-05 16:39:00.918774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:14:56.440 #41 NEW cov: 12449 ft: 14908 corp: 18/476b lim: 35 exec/s: 41 rss: 74Mb L: 30/33 MS: 1 ChangeBit-
00:14:56.440 [2024-11-05 16:39:00.988838] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:56.440 [2024-11-05 16:39:00.988877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:14:56.440 [2024-11-05 16:39:00.988986] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:56.440 [2024-11-05 16:39:00.989008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:14:56.440 [2024-11-05 16:39:00.989103] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:56.440 [2024-11-05 16:39:00.989129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:14:56.697 #42 NEW cov: 12449 ft: 14937 corp: 19/506b lim: 35 exec/s: 42 rss: 74Mb L: 30/33 MS: 1 ChangeByte-
00:14:56.698 [2024-11-05 16:39:01.089288] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000000ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:56.698 [2024-11-05 16:39:01.089326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:14:56.698 [2024-11-05 16:39:01.089427] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:56.698 [2024-11-05 16:39:01.089449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:14:56.698 [2024-11-05 16:39:01.089558] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:56.698 [2024-11-05 16:39:01.089579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:14:56.698 #43 NEW cov: 12449 ft: 14967 corp: 20/537b lim: 35 exec/s: 21 rss: 74Mb L: 31/33 MS: 1 ShuffleBytes-
00:14:56.698 #43 DONE cov: 12449 ft: 14967 corp: 20/537b lim: 35 exec/s: 21 rss: 74Mb
00:14:56.698 Done 43 runs in 2 second(s)
00:14:56.698 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz
00:14:56.698 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:14:56.698 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:14:56.698 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1
00:14:56.698 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16
00:14:56.698 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:14:56.698 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:14:56.698 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
00:14:56.698 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf
00:14:56.698 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:14:56.698 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:14:56.698 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16
00:14:56.956 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416
00:14:56.956 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
00:14:56.956 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416'
00:14:56.956 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:14:56.956 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:14:56.956 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:14:56.956 16:39:01 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16
00:14:56.956 [2024-11-05 16:39:01.326190] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization...
00:14:56.956 [2024-11-05 16:39:01.326265] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3526219 ]
00:14:57.214 [2024-11-05 16:39:01.649106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:57.214 [2024-11-05 16:39:01.706906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:57.214 [2024-11-05 16:39:01.770959] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:57.214 [2024-11-05 16:39:01.787210] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 ***
00:14:57.473 INFO: Running with entropic power schedule (0xFF, 100).
00:14:57.473 INFO: Seed: 2130611722
00:14:57.473 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f),
00:14:57.473 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0),
00:14:57.473 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16
00:14:57.473 INFO: A corpus is not provided, starting from an empty corpus
00:14:57.473 #2 INITED exec/s: 0 rss: 66Mb
00:14:57.473 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:14:57.473 This may also happen if the target rejected all inputs we tried so far
00:14:57.473 [2024-11-05 16:39:01.846928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:17940362863843014904 len:63737 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:57.473 [2024-11-05 16:39:01.846973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:57.473 [2024-11-05 16:39:01.847014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:17940362863843014904 len:63737 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:57.473 [2024-11-05 16:39:01.847037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:57.473 [2024-11-05 16:39:01.847104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:17940362863843014904 len:63737 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:57.473 [2024-11-05 16:39:01.847128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:57.473 [2024-11-05 16:39:01.847195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:17940362863843014904 len:63737 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:57.473 [2024-11-05 16:39:01.847217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:14:57.730 NEW_FUNC[1/716]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519
00:14:57.989 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:14:57.989 #17 NEW cov: 12312 ft: 12306 corp: 2/100b lim: 105 exec/s: 0 rss: 73Mb L: 99/99 MS: 5 ShuffleBytes-ShuffleBytes-ChangeByte-ChangeByte-InsertRepeatedBytes-
00:14:57.989 [2024-11-05 16:39:02.338214] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:57.989 [2024-11-05 16:39:02.338285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:57.989 [2024-11-05 16:39:02.338370] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:57.989 [2024-11-05 16:39:02.338399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:57.989 [2024-11-05 16:39:02.338480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:57.989 [2024-11-05 16:39:02.338507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:57.989 #19 NEW cov: 12425 ft: 13390 corp: 3/168b lim: 105 exec/s: 0 rss: 73Mb L: 68/99 MS: 2 ShuffleBytes-InsertRepeatedBytes-
00:14:57.989 [2024-11-05 16:39:02.398060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:57.989 [2024-11-05 16:39:02.398099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:57.989 [2024-11-05 16:39:02.398152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:57.989 [2024-11-05 16:39:02.398173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:57.989 [2024-11-05 16:39:02.398241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:57.989 [2024-11-05 16:39:02.398263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:57.989 #20 NEW cov: 12431 ft: 13690 corp: 4/236b lim: 105 exec/s: 0 rss: 73Mb L: 68/99 MS: 1 ChangeBinInt-
00:14:57.989 [2024-11-05 16:39:02.478267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:57.989 [2024-11-05 16:39:02.478305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:57.989 [2024-11-05 16:39:02.478365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:57.989 [2024-11-05 16:39:02.478386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:57.989 [2024-11-05 16:39:02.478455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:57.989 [2024-11-05 16:39:02.478477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:57.989 #21 NEW cov: 12516 ft: 13853 corp: 5/304b lim: 105 exec/s: 0 rss: 73Mb L: 68/99 MS: 1 ChangeBit-
00:14:57.989 [2024-11-05 16:39:02.558472] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:57.989 [2024-11-05 16:39:02.558509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:57.989 [2024-11-05 16:39:02.558571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:57.989 [2024-11-05 16:39:02.558593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:57.989 [2024-11-05 16:39:02.558662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:57.989 [2024-11-05 16:39:02.558683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:58.247 #22 NEW cov: 12516 ft: 13970 corp: 6/374b lim: 105 exec/s: 0 rss: 73Mb L: 70/99 MS: 1 CopyPart-
00:14:58.247 [2024-11-05 16:39:02.638670] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.247 [2024-11-05 16:39:02.638706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:58.247 [2024-11-05 16:39:02.638788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:292057777920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.247 [2024-11-05 16:39:02.638811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:58.247 [2024-11-05 16:39:02.638879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.247 [2024-11-05 16:39:02.638905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:58.247 #23 NEW cov: 12516 ft: 14011 corp: 7/442b lim: 105 exec/s: 0 rss: 73Mb L: 68/99 MS: 1 ChangeBinInt-
00:14:58.247 [2024-11-05 16:39:02.688849] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.247 [2024-11-05 16:39:02.688886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:58.247 [2024-11-05 16:39:02.688948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.247 [2024-11-05 16:39:02.688971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:58.248 [2024-11-05 16:39:02.689040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:17870283321406191864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.248 [2024-11-05 16:39:02.689063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:58.248 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662
00:14:58.248 #24 NEW cov: 12539 ft: 14074 corp: 8/513b lim: 105 exec/s: 0 rss: 73Mb L: 71/99 MS: 1 CrossOver-
00:14:58.248 [2024-11-05 16:39:02.739100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.248 [2024-11-05 16:39:02.739138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:58.248 [2024-11-05 16:39:02.739200] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.248 [2024-11-05 16:39:02.739223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:58.248 [2024-11-05 16:39:02.739288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.248 [2024-11-05 16:39:02.739310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:58.248 [2024-11-05 16:39:02.739376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.248 [2024-11-05 16:39:02.739398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:14:58.248 #25 NEW cov: 12539 ft: 14083 corp: 9/612b lim: 105 exec/s: 0 rss: 73Mb L: 99/99 MS: 1 InsertRepeatedBytes-
00:14:58.248 [2024-11-05 16:39:02.789267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.248 [2024-11-05 16:39:02.789306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:58.248 [2024-11-05 16:39:02.789373] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.248 [2024-11-05 16:39:02.789396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:58.248 [2024-11-05 16:39:02.789461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.248 [2024-11-05 16:39:02.789481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:58.248 [2024-11-05 16:39:02.789547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.248 [2024-11-05 16:39:02.789572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:14:58.506 #26 NEW cov: 12539 ft: 14129 corp: 10/711b lim: 105 exec/s: 26 rss: 73Mb L: 99/99 MS: 1 ShuffleBytes-
00:14:58.506 [2024-11-05 16:39:02.869452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.506 [2024-11-05 16:39:02.869489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:58.506 [2024-11-05 16:39:02.869559] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.506 [2024-11-05 16:39:02.869581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:58.506 [2024-11-05 16:39:02.869650] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.506 [2024-11-05 16:39:02.869670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:58.506 [2024-11-05 16:39:02.869743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.506 [2024-11-05 16:39:02.869765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:14:58.506 #27 NEW cov: 12539 ft: 14166 corp: 11/810b lim: 105 exec/s: 27 rss: 73Mb L: 99/99 MS: 1 ChangeByte-
00:14:58.506 [2024-11-05 16:39:02.949702] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.506 [2024-11-05 16:39:02.949749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:58.506 [2024-11-05 16:39:02.949809] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:72057589742962432 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.506 [2024-11-05 16:39:02.949831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:58.506 [2024-11-05 16:39:02.949898] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.506 [2024-11-05 16:39:02.949930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:58.506 [2024-11-05 16:39:02.950019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:9007199254740992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.506 [2024-11-05 16:39:02.950047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:14:58.506 #28 NEW cov: 12539 ft: 14272 corp: 12/894b lim: 105 exec/s: 28 rss: 74Mb L: 84/99 MS: 1 InsertRepeatedBytes-
00:14:58.506 [2024-11-05 16:39:03.029768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.506 [2024-11-05 16:39:03.029806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:58.765 [2024-11-05 16:39:03.029852] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:292057777920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.765 [2024-11-05 16:39:03.029874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:58.765 [2024-11-05 16:39:03.029944] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.765 [2024-11-05 16:39:03.029967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:58.765 #29 NEW cov: 12539 ft: 14322 corp: 13/962b lim: 105 exec/s: 29 rss: 74Mb L: 68/99 MS: 1 CopyPart-
00:14:58.765 [2024-11-05 16:39:03.109975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.765 [2024-11-05 16:39:03.110012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:58.765 [2024-11-05 16:39:03.110076] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.765 [2024-11-05 16:39:03.110099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:58.765 [2024-11-05 16:39:03.110168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:117440512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.765 [2024-11-05 16:39:03.110190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:58.765 #30 NEW cov: 12539 ft: 14366 corp: 14/1032b lim: 105 exec/s: 30 rss: 74Mb L: 70/99 MS: 1 CopyPart-
00:14:58.765 [2024-11-05 16:39:03.170327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:72056494694072320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.765 [2024-11-05 16:39:03.170366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:58.765 [2024-11-05 16:39:03.170423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.765 [2024-11-05 16:39:03.170445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:58.765 [2024-11-05 16:39:03.170515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.765 [2024-11-05 16:39:03.170536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:58.765 [2024-11-05 16:39:03.170608] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.765 [2024-11-05 16:39:03.170632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:14:58.765 #31 NEW cov: 12539 ft: 14397 corp: 15/1131b lim: 105 exec/s: 31 rss: 74Mb L: 99/99 MS: 1 ChangeBinInt-
00:14:58.765 [2024-11-05 16:39:03.220309] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.765 [2024-11-05 16:39:03.220347] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:58.765 [2024-11-05 16:39:03.220404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:292057777920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.765 [2024-11-05 16:39:03.220426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:58.765 [2024-11-05 16:39:03.220493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.765 [2024-11-05 16:39:03.220516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:58.765 #32 NEW cov: 12539 ft: 14489 corp: 16/1199b lim: 105 exec/s: 32 rss: 74Mb L: 68/99 MS: 1 CopyPart-
00:14:58.765 [2024-11-05 16:39:03.300585] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.765 [2024-11-05 16:39:03.300624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:58.765 [2024-11-05 16:39:03.300676] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.765 [2024-11-05 16:39:03.300699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:58.765 [2024-11-05 16:39:03.300769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:17870283321406191864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:58.765 [2024-11-05 16:39:03.300792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:59.023 #33 NEW cov: 12539 ft: 14507 corp: 17/1270b lim: 105 exec/s: 33 rss: 74Mb L: 71/99 MS: 1 ShuffleBytes-
00:14:59.023 [2024-11-05 16:39:03.380978] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.023 [2024-11-05 16:39:03.381020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:59.023 [2024-11-05 16:39:03.381082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:72057589742962432 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.023 [2024-11-05 16:39:03.381104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:59.023 [2024-11-05 16:39:03.381171] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.023 [2024-11-05 16:39:03.381193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:59.023 [2024-11-05 16:39:03.381258] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:20547673299877888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.023 [2024-11-05 16:39:03.381278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:14:59.023 #34 NEW cov: 12539 ft: 14524 corp: 18/1354b lim: 105 exec/s: 34 rss: 74Mb L: 84/99 MS: 1 ChangeByte-
00:14:59.023 [2024-11-05 16:39:03.461003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.023 [2024-11-05 16:39:03.461044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:59.023 [2024-11-05 16:39:03.461092] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.023 [2024-11-05 16:39:03.461114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:59.023 [2024-11-05 16:39:03.461181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:17870283321406191864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.023 [2024-11-05 16:39:03.461204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:59.023 #35 NEW cov: 12539 ft: 14582 corp: 19/1425b lim: 105 exec/s: 35 rss: 74Mb L: 71/99 MS: 1 ChangeBinInt-
00:14:59.023 [2024-11-05 16:39:03.541203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.023 [2024-11-05 16:39:03.541240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:59.024 [2024-11-05 16:39:03.541298] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.024 [2024-11-05 16:39:03.541321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:59.024 [2024-11-05 16:39:03.541393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.024 [2024-11-05 16:39:03.541415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:59.024 #36 NEW cov: 12539 ft: 14593 corp: 20/1493b lim: 105 exec/s: 36 rss: 74Mb L: 68/99 MS: 1 ShuffleBytes-
00:14:59.024 [2024-11-05 16:39:03.591327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.024 [2024-11-05 16:39:03.591363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:59.024 [2024-11-05 16:39:03.591425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.024 [2024-11-05 16:39:03.591448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:59.024 [2024-11-05 16:39:03.591516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.024 [2024-11-05 16:39:03.591538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:59.282 #37 NEW cov: 12539 ft: 14611 corp: 21/1561b lim: 105 exec/s: 37 rss: 74Mb L: 68/99 MS: 1 ShuffleBytes-
00:14:59.282 [2024-11-05 16:39:03.641465] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.282 [2024-11-05 16:39:03.641502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:59.282 [2024-11-05 16:39:03.641564] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1970324836976384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.282 [2024-11-05 16:39:03.641587] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:59.282 [2024-11-05 16:39:03.641654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.282 [2024-11-05 16:39:03.641676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:59.282 #38 NEW cov: 12539 ft: 14636 corp: 22/1631b lim: 105 exec/s: 38 rss: 74Mb L: 70/99 MS: 1 ChangeBinInt-
00:14:59.282 [2024-11-05 16:39:03.691608] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.282 [2024-11-05 16:39:03.691644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:59.282 [2024-11-05 16:39:03.691706] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.282 [2024-11-05 16:39:03.691736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:59.282 [2024-11-05 16:39:03.691802] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:603979776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.282 [2024-11-05 16:39:03.691824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:59.282 #39 NEW cov: 12539 ft: 14737 corp: 23/1699b lim: 105 exec/s: 39 rss: 74Mb L: 68/99 MS: 1 ChangeByte-
00:14:59.282 [2024-11-05 16:39:03.741777] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.282 [2024-11-05 16:39:03.741813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:59.282 [2024-11-05 16:39:03.741877] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.282 [2024-11-05 16:39:03.741903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:59.282 [2024-11-05 16:39:03.741970] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.282 [2024-11-05 16:39:03.741990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:59.282 #40 NEW cov: 12539 ft: 14778 corp: 24/1767b lim: 105 exec/s: 40 rss: 74Mb L: 68/99 MS: 1 ChangeBinInt-
00:14:59.282 [2024-11-05 16:39:03.822066] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.282 [2024-11-05 16:39:03.822102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:14:59.282 [2024-11-05 16:39:03.822163] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:1792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.282 [2024-11-05 16:39:03.822186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:14:59.282 [2024-11-05 16:39:03.822254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:17870283321473300728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:59.282 [2024-11-05 16:39:03.822285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:14:59.282 #41 NEW cov: 12539 ft: 14806 corp: 25/1838b lim: 105 exec/s: 20 rss: 74Mb L: 71/99 MS: 1 ChangeBit-
00:14:59.282 #41 DONE cov: 12539 ft: 14806 corp: 25/1838b lim: 105 exec/s: 20 rss: 74Mb
00:14:59.282 Done 41 runs in 2 second(s)
00:14:59.540 16:39:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz
00:14:59.541 16:39:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:14:59.541 16:39:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:14:59.541 16:39:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1
00:14:59.541 16:39:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17
00:14:59.541 16:39:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:14:59.541 16:39:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:14:59.541 16:39:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
00:14:59.541 16:39:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf
00:14:59.541 16:39:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:14:59.541 16:39:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:14:59.541 16:39:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17
00:14:59.541 16:39:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417
00:14:59.541 16:39:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17
00:14:59.541 16:39:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417'
00:14:59.541 16:39:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:14:59.541 16:39:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:14:59.541 16:39:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:14:59.541 16:39:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17
00:14:59.799 [2024-11-05 16:39:04.034773] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization...
00:14:59.799 [2024-11-05 16:39:04.034854] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3526614 ]
00:15:00.057 [2024-11-05 16:39:04.357702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:00.057 [2024-11-05 16:39:04.416164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:00.057 [2024-11-05 16:39:04.480376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:00.057 [2024-11-05 16:39:04.496626] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 ***
00:15:00.057 INFO: Running with entropic power schedule (0xFF, 100).
00:15:00.057 INFO: Seed: 546649691 00:15:00.057 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:15:00.057 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:15:00.057 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:15:00.057 INFO: A corpus is not provided, starting from an empty corpus 00:15:00.057 #2 INITED exec/s: 0 rss: 65Mb 00:15:00.057 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:15:00.057 This may also happen if the target rejected all inputs we tried so far 00:15:00.057 [2024-11-05 16:39:04.542512] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168820736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.057 [2024-11-05 16:39:04.542547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:00.057 [2024-11-05 16:39:04.542614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.057 [2024-11-05 16:39:04.542629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:00.057 [2024-11-05 16:39:04.542684] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.057 [2024-11-05 16:39:04.542701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:00.624 NEW_FUNC[1/717]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:15:00.624 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:15:00.624 #5 NEW cov: 12328 ft: 12328 corp: 2/84b lim: 120 exec/s: 0 rss: 72Mb L: 83/83 MS: 3 ShuffleBytes-InsertByte-InsertRepeatedBytes- 00:15:00.624 [2024-11-05 16:39:05.015661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168820736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.624 [2024-11-05 16:39:05.015723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:00.624 [2024-11-05 16:39:05.015801] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.624 [2024-11-05 16:39:05.015826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:00.624 [2024-11-05 16:39:05.015928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.624 [2024-11-05 16:39:05.015952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:00.624 #11 NEW cov: 12446 ft: 13014 corp: 3/170b lim: 120 exec/s: 0 rss: 72Mb L: 86/86 MS: 1 InsertRepeatedBytes- 00:15:00.624 [2024-11-05 16:39:05.116364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168820736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.624 [2024-11-05 
16:39:05.116411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:00.624 [2024-11-05 16:39:05.116478] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.624 [2024-11-05 16:39:05.116502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:00.624 [2024-11-05 16:39:05.116589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.624 [2024-11-05 16:39:05.116612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:00.624 [2024-11-05 16:39:05.116704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.624 [2024-11-05 16:39:05.116730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:00.624 #17 NEW cov: 12452 ft: 13508 corp: 4/285b lim: 120 exec/s: 0 rss: 72Mb L: 115/115 MS: 1 CrossOver- 00:15:00.624 [2024-11-05 16:39:05.186100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168820736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.624 [2024-11-05 16:39:05.186146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:00.624 [2024-11-05 16:39:05.186248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.624 [2024-11-05 16:39:05.186272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:00.624 [2024-11-05 16:39:05.186374] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.624 [2024-11-05 16:39:05.186398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:00.883 #23 NEW cov: 12537 ft: 13753 corp: 5/368b lim: 120 exec/s: 0 rss: 72Mb L: 83/115 MS: 1 CopyPart- 00:15:00.883 [2024-11-05 16:39:05.246420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:1997537280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.883 [2024-11-05 16:39:05.246461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:00.883 [2024-11-05 16:39:05.246534] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.883 [2024-11-05 16:39:05.246558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:00.883 [2024-11-05 16:39:05.246664] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.883 [2024-11-05 16:39:05.246690] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 
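[editor's note] Each "#N NEW" entry in this stream is a standard libFuzzer status line recording a mutated input that increased coverage. Field meanings below follow the stock libFuzzer output format; the sample values are copied from the "#11 NEW" entry above:

  #11 NEW  cov: 12446  ft: 13014  corp: 3/170b  lim: 120  exec/s: 0  rss: 72Mb  L: 86/86  MS: 1 InsertRepeatedBytes-
  #  #11       input number at which the event occurred
  #  cov/ft    covered code edges and coverage "features" observed so far
  #  corp      corpus state: 3 inputs totalling 170 bytes
  #  lim       current cap on generated input length
  #  exec/s    executions per second (printed as 0 before a full second has elapsed)
  #  rss       resident memory of the fuzzer process
  #  L: 86/86  length of this input / largest input currently in the corpus
  #  MS        mutation sequence (count and operator names) that produced it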
00:15:00.883 #24 NEW cov: 12537 ft: 13820 corp: 6/451b lim: 120 exec/s: 0 rss: 72Mb L: 83/115 MS: 1 ChangeByte- 00:15:00.883 [2024-11-05 16:39:05.337067] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11140386614638779034 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.883 [2024-11-05 16:39:05.337107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:00.883 [2024-11-05 16:39:05.337180] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.883 [2024-11-05 16:39:05.337209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:00.883 [2024-11-05 16:39:05.337307] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.883 [2024-11-05 16:39:05.337330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:00.883 [2024-11-05 16:39:05.337431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.883 [2024-11-05 16:39:05.337456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:00.883 #31 NEW cov: 12537 ft: 13939 corp: 7/558b lim: 120 exec/s: 0 rss: 72Mb L: 107/115 MS: 2 CMP-InsertRepeatedBytes- DE: "\017\000\000\000\000\000\000\000"- 00:15:00.883 [2024-11-05 16:39:05.406988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:139436490752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.883 [2024-11-05 16:39:05.407028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:00.883 [2024-11-05 16:39:05.407098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.883 [2024-11-05 16:39:05.407122] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:00.883 [2024-11-05 16:39:05.407213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:00.883 [2024-11-05 16:39:05.407237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:01.142 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:15:01.142 #32 NEW cov: 12560 ft: 14108 corp: 8/641b lim: 120 exec/s: 0 rss: 73Mb L: 83/115 MS: 1 ChangeBit- 00:15:01.142 [2024-11-05 16:39:05.507457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168820736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.142 [2024-11-05 16:39:05.507497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:01.142 [2024-11-05 16:39:05.507565] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:0 lba:3038287258491824682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.142 [2024-11-05 16:39:05.507590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:01.142 [2024-11-05 16:39:05.507671] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.142 [2024-11-05 16:39:05.507697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:01.142 #33 NEW cov: 12560 ft: 14134 corp: 9/730b lim: 120 exec/s: 33 rss: 73Mb L: 89/115 MS: 1 InsertRepeatedBytes- 00:15:01.142 [2024-11-05 16:39:05.567605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:139436490752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.142 [2024-11-05 16:39:05.567646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:01.142 [2024-11-05 16:39:05.567726] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.142 [2024-11-05 16:39:05.567754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:01.142 [2024-11-05 16:39:05.567856] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.142 [2024-11-05 16:39:05.567886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:01.142 #34 NEW cov: 12560 ft: 14175 corp: 10/813b lim: 120 exec/s: 34 rss: 73Mb L: 83/115 MS: 1 ChangeBinInt- 00:15:01.142 [2024-11-05 16:39:05.658417] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:139436490752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.142 [2024-11-05 16:39:05.658455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:01.142 [2024-11-05 16:39:05.658529] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.142 [2024-11-05 16:39:05.658553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:01.142 [2024-11-05 16:39:05.658618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.142 [2024-11-05 16:39:05.658641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:01.142 [2024-11-05 16:39:05.658747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.142 [2024-11-05 16:39:05.658773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:01.142 #35 NEW cov: 12560 ft: 14211 corp: 11/912b lim: 120 exec/s: 35 rss: 73Mb L: 99/115 MS: 1 CrossOver- 00:15:01.401 [2024-11-05 16:39:05.728342] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:0 lba:139436490752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.401 [2024-11-05 16:39:05.728385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:01.401 [2024-11-05 16:39:05.728455] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.401 [2024-11-05 16:39:05.728479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:01.401 [2024-11-05 16:39:05.728581] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.401 [2024-11-05 16:39:05.728605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:01.401 #36 NEW cov: 12560 ft: 14291 corp: 12/995b lim: 120 exec/s: 36 rss: 73Mb L: 83/115 MS: 1 PersAutoDict- DE: "\017\000\000\000\000\000\000\000"- 00:15:01.401 [2024-11-05 16:39:05.818961] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11140386614638779034 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.401 [2024-11-05 16:39:05.819002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:01.401 [2024-11-05 16:39:05.819075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.401 [2024-11-05 16:39:05.819097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:01.401 [2024-11-05 16:39:05.819187] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.401 [2024-11-05 16:39:05.819212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:01.401 [2024-11-05 16:39:05.819317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.401 [2024-11-05 16:39:05.819345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:01.401 #37 NEW cov: 12560 ft: 14294 corp: 13/1102b lim: 120 exec/s: 37 rss: 73Mb L: 107/115 MS: 1 ChangeBit- 00:15:01.401 [2024-11-05 16:39:05.919394] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11140386614638779034 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.401 [2024-11-05 16:39:05.919438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:01.401 [2024-11-05 16:39:05.919516] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.401 [2024-11-05 16:39:05.919539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:01.401 [2024-11-05 16:39:05.919631] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.401 [2024-11-05 16:39:05.919655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:01.401 [2024-11-05 16:39:05.919745] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.401 [2024-11-05 16:39:05.919771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:01.401 #38 NEW cov: 12560 ft: 14352 corp: 14/1217b lim: 120 exec/s: 38 rss: 73Mb L: 115/115 MS: 1 PersAutoDict- DE: "\017\000\000\000\000\000\000\000"- 00:15:01.660 [2024-11-05 16:39:05.989604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11140386614638779034 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.660 [2024-11-05 16:39:05.989658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:01.660 [2024-11-05 16:39:05.989762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.660 [2024-11-05 16:39:05.989788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:01.660 [2024-11-05 16:39:05.989886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.660 [2024-11-05 16:39:05.989914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:01.660 [2024-11-05 16:39:05.990023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.660 [2024-11-05 16:39:05.990049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:01.660 #39 NEW cov: 12560 ft: 14395 corp: 15/1330b lim: 120 exec/s: 39 rss: 73Mb L: 113/115 MS: 1 CopyPart- 00:15:01.660 [2024-11-05 16:39:06.089732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:139436490752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.660 [2024-11-05 16:39:06.089774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:01.660 [2024-11-05 16:39:06.089873] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.660 [2024-11-05 16:39:06.089893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:01.660 [2024-11-05 16:39:06.090000] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.660 [2024-11-05 16:39:06.090027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:01.660 #45 NEW cov: 12560 ft: 14421 corp: 16/1413b lim: 
120 exec/s: 45 rss: 73Mb L: 83/115 MS: 1 ChangeBinInt- 00:15:01.660 [2024-11-05 16:39:06.150135] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11140386614638779034 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.660 [2024-11-05 16:39:06.150174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:01.660 [2024-11-05 16:39:06.150242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.660 [2024-11-05 16:39:06.150268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:01.660 [2024-11-05 16:39:06.150331] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.660 [2024-11-05 16:39:06.150353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:01.660 [2024-11-05 16:39:06.150450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.660 [2024-11-05 16:39:06.150476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:01.660 #46 NEW cov: 12560 ft: 14439 corp: 17/1528b lim: 120 exec/s: 46 rss: 73Mb L: 115/115 MS: 1 ShuffleBytes- 00:15:01.919 [2024-11-05 16:39:06.250580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11140386614638779034 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.919 [2024-11-05 16:39:06.250623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:01.919 [2024-11-05 16:39:06.250687] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11140386617063807642 len:39579 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.919 [2024-11-05 16:39:06.250709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:01.919 [2024-11-05 16:39:06.250787] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.919 [2024-11-05 16:39:06.250810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:01.919 [2024-11-05 16:39:06.250913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:707406378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.919 [2024-11-05 16:39:06.250939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:01.919 #47 NEW cov: 12560 ft: 14467 corp: 18/1628b lim: 120 exec/s: 47 rss: 73Mb L: 100/115 MS: 1 CrossOver- 00:15:01.919 [2024-11-05 16:39:06.351023] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168820736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.919 [2024-11-05 16:39:06.351062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:01.919 [2024-11-05 16:39:06.351134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.919 [2024-11-05 16:39:06.351158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:01.919 [2024-11-05 16:39:06.351227] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.919 [2024-11-05 16:39:06.351253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:01.919 [2024-11-05 16:39:06.351355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.919 [2024-11-05 16:39:06.351379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:01.919 #48 NEW cov: 12560 ft: 14488 corp: 19/1743b lim: 120 exec/s: 48 rss: 73Mb L: 115/115 MS: 1 ChangeByte- 00:15:01.919 [2024-11-05 16:39:06.450841] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168820736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.919 [2024-11-05 16:39:06.450880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:01.919 [2024-11-05 16:39:06.450950] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3038287258491824682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.919 [2024-11-05 16:39:06.450973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:01.919 [2024-11-05 16:39:06.451034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:01.919 [2024-11-05 16:39:06.451058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:02.177 #49 NEW cov: 12560 ft: 14508 corp: 20/1832b lim: 120 exec/s: 49 rss: 73Mb L: 89/115 MS: 1 ChangeBinInt- 00:15:02.177 [2024-11-05 16:39:06.540871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168820736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:02.177 [2024-11-05 16:39:06.540912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:02.177 [2024-11-05 16:39:06.541022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:3038287258491824682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:02.177 [2024-11-05 16:39:06.541048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:02.177 #50 NEW cov: 12560 ft: 14906 corp: 21/1899b lim: 120 exec/s: 25 rss: 73Mb L: 67/115 MS: 1 CrossOver- 00:15:02.177 #50 DONE cov: 12560 ft: 14906 corp: 21/1899b lim: 120 exec/s: 25 rss: 73Mb 00:15:02.177 ###### Recommended dictionary. ###### 00:15:02.177 "\017\000\000\000\000\000\000\000" # Uses: 3 00:15:02.177 ###### End of recommended dictionary. 
###### 00:15:02.177 Done 50 runs in 2 second(s) 00:15:02.177 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:15:02.177 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:15:02.177 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:15:02.177 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:15:02.177 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:15:02.177 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:15:02.177 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:15:02.177 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:15:02.177 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:15:02.177 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:15:02.177 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:15:02.177 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:15:02.177 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:15:02.177 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:15:02.177 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:15:02.177 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:15:02.177 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:15:02.178 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:15:02.178 16:39:06 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:15:02.178 [2024-11-05 16:39:06.749517] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
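[editor's sketch] Between runs, control returns to ../common.sh, whose @72-@73 trace entries ("(( i++ ))", "(( i < fuzz_num ))", then "start_llvm_fuzz 18 1 0x1") are exactly what bash xtrace emits for a C-style for loop between iterations. A minimal sketch of that driver loop, assuming fuzz_num holds the number of targets and the index starts at 0 (neither is visible in this excerpt); the constant 1 and 0x1 arguments match both start_llvm_fuzz calls seen above:

  for (( i = 0; i < fuzz_num; i++ )); do   # common.sh@72 in the trace
      start_llvm_fuzz "$i" 1 0x1           # common.sh@73: target number, runtime in s, core mask
  done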
00:15:02.178 [2024-11-05 16:39:06.749598] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3527406 ] 00:15:02.744 [2024-11-05 16:39:07.087164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.744 [2024-11-05 16:39:07.144689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.744 [2024-11-05 16:39:07.208657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.744 [2024-11-05 16:39:07.224898] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:15:02.744 INFO: Running with entropic power schedule (0xFF, 100). 00:15:02.744 INFO: Seed: 3273631894 00:15:02.744 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:15:02.744 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:15:02.744 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:15:02.744 INFO: A corpus is not provided, starting from an empty corpus 00:15:02.744 #2 INITED exec/s: 0 rss: 66Mb 00:15:02.744 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:15:02.744 This may also happen if the target rejected all inputs we tried so far 00:15:02.744 [2024-11-05 16:39:07.274532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:02.744 [2024-11-05 16:39:07.274563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:02.744 [2024-11-05 16:39:07.274620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:02.744 [2024-11-05 16:39:07.274635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:03.002 NEW_FUNC[1/715]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:15:03.002 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:15:03.002 #33 NEW cov: 12276 ft: 12273 corp: 2/42b lim: 100 exec/s: 0 rss: 73Mb L: 41/41 MS: 1 InsertRepeatedBytes- 00:15:03.261 [2024-11-05 16:39:07.595262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:03.261 [2024-11-05 16:39:07.595303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:03.261 [2024-11-05 16:39:07.595336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:03.261 [2024-11-05 16:39:07.595351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:03.261 #34 NEW cov: 12389 ft: 12832 corp: 3/83b lim: 100 exec/s: 0 rss: 73Mb L: 41/41 MS: 1 ChangeBit- 00:15:03.261 [2024-11-05 16:39:07.655301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:03.261 [2024-11-05 16:39:07.655333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:03.261 [2024-11-05 16:39:07.655382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:03.261 [2024-11-05 16:39:07.655397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:03.261 #40 NEW cov: 12395 ft: 13084 corp: 4/125b lim: 100 exec/s: 0 rss: 73Mb L: 42/42 MS: 1 CrossOver- 00:15:03.261 [2024-11-05 16:39:07.695407] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:03.261 [2024-11-05 16:39:07.695434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:03.261 [2024-11-05 16:39:07.695489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:03.261 [2024-11-05 16:39:07.695501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:03.261 #41 NEW cov: 12480 ft: 13351 corp: 5/167b lim: 100 exec/s: 0 rss: 73Mb L: 42/42 MS: 1 InsertByte- 00:15:03.262 [2024-11-05 16:39:07.755609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:03.262 [2024-11-05 16:39:07.755635] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:03.262 [2024-11-05 16:39:07.755690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:03.262 [2024-11-05 16:39:07.755706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:03.262 #47 NEW cov: 12480 ft: 13482 corp: 6/209b lim: 100 exec/s: 0 rss: 73Mb L: 42/42 MS: 1 InsertByte- 00:15:03.262 [2024-11-05 16:39:07.795676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:03.262 [2024-11-05 16:39:07.795703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:03.262 [2024-11-05 16:39:07.795765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:03.262 [2024-11-05 16:39:07.795779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:03.262 #48 NEW cov: 12480 ft: 13556 corp: 7/250b lim: 100 exec/s: 0 rss: 73Mb L: 41/42 MS: 1 ChangeByte- 00:15:03.262 [2024-11-05 16:39:07.835846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:03.262 [2024-11-05 16:39:07.835873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:03.262 [2024-11-05 16:39:07.835931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:03.262 [2024-11-05 16:39:07.835944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:03.520 #49 NEW cov: 12480 ft: 13601 corp: 8/293b lim: 100 exec/s: 0 rss: 73Mb L: 43/43 MS: 1 InsertByte- 00:15:03.520 [2024-11-05 16:39:07.895970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 
00:15:03.520 [2024-11-05 16:39:07.895997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:03.520 [2024-11-05 16:39:07.896052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:03.520 [2024-11-05 16:39:07.896065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:03.520 #50 NEW cov: 12480 ft: 13678 corp: 9/335b lim: 100 exec/s: 0 rss: 73Mb L: 42/43 MS: 1 CrossOver- 00:15:03.520 [2024-11-05 16:39:07.956173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:03.520 [2024-11-05 16:39:07.956208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:03.520 [2024-11-05 16:39:07.956251] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:03.520 [2024-11-05 16:39:07.956266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:03.520 #51 NEW cov: 12480 ft: 13729 corp: 10/381b lim: 100 exec/s: 0 rss: 73Mb L: 46/46 MS: 1 CMP- DE: "\001\000\000p"- 00:15:03.520 [2024-11-05 16:39:07.996254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:03.520 [2024-11-05 16:39:07.996282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:03.520 [2024-11-05 16:39:07.996339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:03.520 [2024-11-05 16:39:07.996351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:03.520 #52 NEW cov: 12480 ft: 13773 corp: 11/425b lim: 100 exec/s: 0 rss: 73Mb L: 44/46 MS: 1 CMP- DE: "\001\000"- 00:15:03.520 [2024-11-05 16:39:08.056455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:03.520 [2024-11-05 16:39:08.056483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:03.520 [2024-11-05 16:39:08.056536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:03.520 [2024-11-05 16:39:08.056548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:03.520 #53 NEW cov: 12480 ft: 13791 corp: 12/466b lim: 100 exec/s: 0 rss: 73Mb L: 41/46 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:15:03.779 [2024-11-05 16:39:08.116604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:03.779 [2024-11-05 16:39:08.116632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:03.779 [2024-11-05 16:39:08.116688] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:03.779 [2024-11-05 16:39:08.116704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:03.779 NEW_FUNC[1/1]: 0x1c30458 in 
get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:15:03.779 #54 NEW cov: 12503 ft: 13805 corp: 13/512b lim: 100 exec/s: 0 rss: 74Mb L: 46/46 MS: 1 ShuffleBytes- 00:15:03.779 [2024-11-05 16:39:08.177037] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:03.779 [2024-11-05 16:39:08.177066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:03.779 [2024-11-05 16:39:08.177113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:03.779 [2024-11-05 16:39:08.177127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:03.779 [2024-11-05 16:39:08.177172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:15:03.779 [2024-11-05 16:39:08.177187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:03.779 [2024-11-05 16:39:08.177253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:15:03.779 [2024-11-05 16:39:08.177275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:03.779 #55 NEW cov: 12503 ft: 14165 corp: 14/592b lim: 100 exec/s: 0 rss: 74Mb L: 80/80 MS: 1 CrossOver- 00:15:03.779 [2024-11-05 16:39:08.216888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:03.779 [2024-11-05 16:39:08.216918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:03.779 [2024-11-05 16:39:08.216974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:03.779 [2024-11-05 16:39:08.216989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:03.779 #56 NEW cov: 12503 ft: 14182 corp: 15/636b lim: 100 exec/s: 56 rss: 74Mb L: 44/80 MS: 1 ChangeByte- 00:15:03.779 [2024-11-05 16:39:08.277070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:03.779 [2024-11-05 16:39:08.277099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:03.779 [2024-11-05 16:39:08.277154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:03.779 [2024-11-05 16:39:08.277169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:03.779 #57 NEW cov: 12503 ft: 14197 corp: 16/677b lim: 100 exec/s: 57 rss: 74Mb L: 41/80 MS: 1 CopyPart- 00:15:03.779 [2024-11-05 16:39:08.317031] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:03.779 [2024-11-05 16:39:08.317058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:03.779 #58 NEW cov: 12503 ft: 14572 corp: 17/715b lim: 100 exec/s: 58 rss: 74Mb L: 38/80 MS: 1 EraseBytes- 00:15:04.087 [2024-11-05 16:39:08.377370] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:04.087 [2024-11-05 16:39:08.377398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:04.087 [2024-11-05 16:39:08.377454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:04.087 [2024-11-05 16:39:08.377467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:04.087 #59 NEW cov: 12503 ft: 14617 corp: 18/766b lim: 100 exec/s: 59 rss: 74Mb L: 51/80 MS: 1 CopyPart- 00:15:04.087 [2024-11-05 16:39:08.417422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:04.087 [2024-11-05 16:39:08.417450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:04.087 [2024-11-05 16:39:08.417504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:04.087 [2024-11-05 16:39:08.417519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:04.087 #60 NEW cov: 12503 ft: 14629 corp: 19/809b lim: 100 exec/s: 60 rss: 74Mb L: 43/80 MS: 1 InsertByte- 00:15:04.087 [2024-11-05 16:39:08.457548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:04.087 [2024-11-05 16:39:08.457576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:04.087 [2024-11-05 16:39:08.457631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:04.087 [2024-11-05 16:39:08.457644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:04.087 #61 NEW cov: 12503 ft: 14637 corp: 20/851b lim: 100 exec/s: 61 rss: 74Mb L: 42/80 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:15:04.087 [2024-11-05 16:39:08.497844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:04.087 [2024-11-05 16:39:08.497872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:04.087 [2024-11-05 16:39:08.497926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:04.087 [2024-11-05 16:39:08.497942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:04.087 [2024-11-05 16:39:08.497992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:15:04.087 [2024-11-05 16:39:08.498008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:04.087 #62 NEW cov: 12503 ft: 14867 corp: 21/916b lim: 100 exec/s: 62 rss: 74Mb L: 65/80 MS: 1 CopyPart- 00:15:04.087 [2024-11-05 16:39:08.537821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:04.087 [2024-11-05 16:39:08.537848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:15:04.087 [2024-11-05 16:39:08.537904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:04.087 [2024-11-05 16:39:08.537916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:04.087 #63 NEW cov: 12503 ft: 14875 corp: 22/959b lim: 100 exec/s: 63 rss: 74Mb L: 43/80 MS: 1 InsertByte- 00:15:04.087 [2024-11-05 16:39:08.577801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:04.087 [2024-11-05 16:39:08.577829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:04.087 #64 NEW cov: 12503 ft: 14889 corp: 23/997b lim: 100 exec/s: 64 rss: 74Mb L: 38/80 MS: 1 ChangeByte- 00:15:04.087 [2024-11-05 16:39:08.638176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:04.087 [2024-11-05 16:39:08.638209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:04.087 [2024-11-05 16:39:08.638261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:04.087 [2024-11-05 16:39:08.638276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:04.370 #65 NEW cov: 12503 ft: 14900 corp: 24/1040b lim: 100 exec/s: 65 rss: 74Mb L: 43/80 MS: 1 CopyPart- 00:15:04.370 [2024-11-05 16:39:08.698300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:04.370 [2024-11-05 16:39:08.698329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:04.370 [2024-11-05 16:39:08.698385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:04.370 [2024-11-05 16:39:08.698399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:04.370 #66 NEW cov: 12503 ft: 14923 corp: 25/1081b lim: 100 exec/s: 66 rss: 74Mb L: 41/80 MS: 1 ChangeBit- 00:15:04.370 [2024-11-05 16:39:08.758556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:04.370 [2024-11-05 16:39:08.758585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:04.370 [2024-11-05 16:39:08.758636] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:04.370 [2024-11-05 16:39:08.758649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:04.370 [2024-11-05 16:39:08.758700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:15:04.370 [2024-11-05 16:39:08.758729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:04.370 #67 NEW cov: 12503 ft: 14931 corp: 26/1149b lim: 100 exec/s: 67 rss: 74Mb L: 68/80 MS: 1 CrossOver- 00:15:04.370 [2024-11-05 16:39:08.798410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 
00:15:04.370 [2024-11-05 16:39:08.798440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:04.370 #68 NEW cov: 12503 ft: 14939 corp: 27/1187b lim: 100 exec/s: 68 rss: 74Mb L: 38/80 MS: 1 ChangeBinInt- 00:15:04.370 [2024-11-05 16:39:08.858711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:04.370 [2024-11-05 16:39:08.858743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:04.370 [2024-11-05 16:39:08.858792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:04.370 [2024-11-05 16:39:08.858807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:04.370 #69 NEW cov: 12503 ft: 14940 corp: 28/1229b lim: 100 exec/s: 69 rss: 74Mb L: 42/80 MS: 1 EraseBytes- 00:15:04.370 [2024-11-05 16:39:08.918899] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:04.370 [2024-11-05 16:39:08.918926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:04.370 [2024-11-05 16:39:08.918982] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:04.370 [2024-11-05 16:39:08.918995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:04.370 #70 NEW cov: 12503 ft: 14944 corp: 29/1272b lim: 100 exec/s: 70 rss: 74Mb L: 43/80 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:15:04.646 [2024-11-05 16:39:08.959284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:04.646 [2024-11-05 16:39:08.959312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:04.646 [2024-11-05 16:39:08.959361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:04.646 [2024-11-05 16:39:08.959377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:04.646 [2024-11-05 16:39:08.959431] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:15:04.646 [2024-11-05 16:39:08.959446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:04.646 [2024-11-05 16:39:08.959500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:15:04.646 [2024-11-05 16:39:08.959515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:04.646 #71 NEW cov: 12503 ft: 14955 corp: 30/1353b lim: 100 exec/s: 71 rss: 74Mb L: 81/81 MS: 1 CopyPart- 00:15:04.646 [2024-11-05 16:39:09.019193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:04.646 [2024-11-05 16:39:09.019219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:04.646 [2024-11-05 16:39:09.019275] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:04.646 [2024-11-05 16:39:09.019290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:04.646 [2024-11-05 16:39:09.059306] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:04.646 [2024-11-05 16:39:09.059332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:04.646 [2024-11-05 16:39:09.059388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:04.646 [2024-11-05 16:39:09.059400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:04.646 #73 NEW cov: 12503 ft: 14957 corp: 31/1396b lim: 100 exec/s: 73 rss: 74Mb L: 43/81 MS: 2 ChangeBinInt-InsertByte- 00:15:04.646 [2024-11-05 16:39:09.099449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:04.646 [2024-11-05 16:39:09.099476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:04.646 [2024-11-05 16:39:09.099533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:04.646 [2024-11-05 16:39:09.099547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:04.646 #74 NEW cov: 12503 ft: 14958 corp: 32/1440b lim: 100 exec/s: 74 rss: 74Mb L: 44/81 MS: 1 InsertByte- 00:15:04.646 [2024-11-05 16:39:09.159875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:04.646 [2024-11-05 16:39:09.159901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:04.646 [2024-11-05 16:39:09.159947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:04.646 [2024-11-05 16:39:09.159963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:04.646 [2024-11-05 16:39:09.160009] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:15:04.646 [2024-11-05 16:39:09.160024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:04.646 [2024-11-05 16:39:09.160078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:15:04.646 [2024-11-05 16:39:09.160093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:04.646 #75 NEW cov: 12503 ft: 15012 corp: 33/1521b lim: 100 exec/s: 75 rss: 74Mb L: 81/81 MS: 1 InsertByte- 00:15:04.646 [2024-11-05 16:39:09.199694] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:04.646 [2024-11-05 16:39:09.199723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:04.646 [2024-11-05 16:39:09.199775] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:04.646 [2024-11-05 16:39:09.199788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:04.904 #76 NEW cov: 12503 ft: 15052 corp: 34/1568b lim: 100 exec/s: 76 rss: 74Mb L: 47/81 MS: 1 InsertByte- 00:15:04.904 [2024-11-05 16:39:09.259883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:15:04.904 [2024-11-05 16:39:09.259910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:04.904 [2024-11-05 16:39:09.259964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:15:04.904 [2024-11-05 16:39:09.259977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:04.904 #77 NEW cov: 12503 ft: 15053 corp: 35/1615b lim: 100 exec/s: 38 rss: 74Mb L: 47/81 MS: 1 ChangeBinInt- 00:15:04.904 #77 DONE cov: 12503 ft: 15053 corp: 35/1615b lim: 100 exec/s: 38 rss: 74Mb 00:15:04.904 ###### Recommended dictionary. ###### 00:15:04.904 "\001\000\000p" # Uses: 0 00:15:04.904 "\001\000" # Uses: 0 00:15:04.904 "\377\377\377\377\377\377\377\377" # Uses: 2 00:15:04.904 ###### End of recommended dictionary. ###### 00:15:04.904 Done 77 runs in 2 second(s) 00:15:04.904 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:15:04.904 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:15:04.904 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:15:04.904 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:15:04.904 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:15:04.904 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:15:04.904 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:15:04.904 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:15:04.905 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:15:04.905 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:15:04.905 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:15:04.905 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:15:04.905 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:15:04.905 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:15:04.905 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:15:04.905 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:15:04.905 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:15:04.905 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 
00:15:04.905 16:39:09 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19
[2024-11-05 16:39:09.481247] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization...
[2024-11-05 16:39:09.481326] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3527807 ]
[2024-11-05 16:39:09.821632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-05 16:39:09.886950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-05 16:39:09.951072] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-11-05 16:39:09.967317] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
INFO: Running with entropic power schedule (0xFF, 100).
INFO: Seed: 1721676282
INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f),
INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0),
INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
INFO: A corpus is not provided, starting from an empty corpus
#2 INITED exec/s: 0 rss: 66Mb
WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:15:05.472 This may also happen if the target rejected all inputs we tried so far 00:15:05.472 [2024-11-05 16:39:10.013283] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:05.472 [2024-11-05 16:39:10.013324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:05.472 [2024-11-05 16:39:10.013389] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:15:05.472 [2024-11-05 16:39:10.013408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:05.472 [2024-11-05 16:39:10.013466] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:15:05.472 [2024-11-05 16:39:10.013488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:05.472 [2024-11-05 16:39:10.013547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:15:05.472 [2024-11-05 16:39:10.013565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:05.988 NEW_FUNC[1/715]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:15:05.988 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:15:05.988 #10 NEW cov: 12247 ft: 12246 corp: 2/50b lim: 50 exec/s: 0 rss: 73Mb L: 49/49 MS: 3 ChangeBit-CrossOver-InsertRepeatedBytes- 00:15:05.988 [2024-11-05 16:39:10.474397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:05.988 [2024-11-05 16:39:10.474444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:05.988 [2024-11-05 16:39:10.474483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057598332895231 len:1 00:15:05.988 [2024-11-05 16:39:10.474501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:05.988 [2024-11-05 16:39:10.474558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744069415108607 len:65536 00:15:05.988 [2024-11-05 16:39:10.474575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:05.988 [2024-11-05 16:39:10.474629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:15:05.988 [2024-11-05 16:39:10.474645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:05.988 #11 NEW cov: 12367 ft: 12852 corp: 3/99b lim: 50 exec/s: 0 rss: 73Mb L: 49/49 MS: 1 ChangeBinInt- 00:15:05.988 [2024-11-05 16:39:10.534459] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:05.988 [2024-11-05 16:39:10.534491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:05.988 [2024-11-05 16:39:10.534544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057598332895231 len:1 00:15:05.988 [2024-11-05 16:39:10.534559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:05.988 [2024-11-05 16:39:10.534614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:720575936084836351 len:65536 00:15:05.988 [2024-11-05 16:39:10.534633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:05.988 [2024-11-05 16:39:10.534690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:15:05.988 [2024-11-05 16:39:10.534706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:06.247 #17 NEW cov: 12373 ft: 13008 corp: 4/148b lim: 50 exec/s: 0 rss: 73Mb L: 49/49 MS: 1 ChangeBinInt- 00:15:06.247 [2024-11-05 16:39:10.594722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:06.247 [2024-11-05 16:39:10.594753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:06.247 [2024-11-05 16:39:10.594812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057598332895231 len:1 00:15:06.247 [2024-11-05 16:39:10.594826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:06.247 [2024-11-05 16:39:10.594878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:720575936084836351 len:65536 00:15:06.247 [2024-11-05 16:39:10.594895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:06.247 [2024-11-05 16:39:10.594948] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:15:06.247 [2024-11-05 16:39:10.594964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:06.247 [2024-11-05 16:39:10.595019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 00:15:06.247 [2024-11-05 16:39:10.595036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:15:06.247 #18 NEW cov: 12458 ft: 13377 corp: 5/198b lim: 50 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 CopyPart- 00:15:06.247 [2024-11-05 16:39:10.654652] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356792496577 len:49602 00:15:06.247 [2024-11-05 16:39:10.654683] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:06.247 [2024-11-05 16:39:10.654738] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 00:15:06.247 [2024-11-05 16:39:10.654753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:06.247 [2024-11-05 16:39:10.654808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 00:15:06.247 [2024-11-05 16:39:10.654825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:06.247 #21 NEW cov: 12458 ft: 13775 corp: 6/236b lim: 50 exec/s: 0 rss: 73Mb L: 38/50 MS: 3 ChangeBit-ChangeBit-InsertRepeatedBytes- 00:15:06.247 [2024-11-05 16:39:10.694892] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:06.247 [2024-11-05 16:39:10.694922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:06.247 [2024-11-05 16:39:10.694975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057598332895231 len:1 00:15:06.247 [2024-11-05 16:39:10.694990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:06.247 [2024-11-05 16:39:10.695041] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:720575936084836351 len:65536 00:15:06.247 [2024-11-05 16:39:10.695059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:06.247 [2024-11-05 16:39:10.695114] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:15:06.247 [2024-11-05 16:39:10.695130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:06.247 #27 NEW cov: 12458 ft: 13820 corp: 7/285b lim: 50 exec/s: 0 rss: 73Mb L: 49/50 MS: 1 ShuffleBytes- 00:15:06.247 [2024-11-05 16:39:10.734949] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:15770157678700714714 len:56027 00:15:06.247 [2024-11-05 16:39:10.734982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:06.247 [2024-11-05 16:39:10.735037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:15770157678700714714 len:56027 00:15:06.247 [2024-11-05 16:39:10.735051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:06.247 [2024-11-05 16:39:10.735106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:15770157678700714714 len:56027 00:15:06.247 [2024-11-05 16:39:10.735121] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:06.247 [2024-11-05 
16:39:10.735175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:15770157678700714714 len:56027 00:15:06.247 [2024-11-05 16:39:10.735192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:06.247 #30 NEW cov: 12458 ft: 13948 corp: 8/331b lim: 50 exec/s: 0 rss: 73Mb L: 46/50 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 00:15:06.247 [2024-11-05 16:39:10.774857] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069498470399 len:65536 00:15:06.247 [2024-11-05 16:39:10.774886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:06.247 [2024-11-05 16:39:10.774945] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:15:06.247 [2024-11-05 16:39:10.774963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:06.248 #33 NEW cov: 12458 ft: 14215 corp: 9/357b lim: 50 exec/s: 0 rss: 73Mb L: 26/50 MS: 3 ChangeBinInt-ShuffleBytes-InsertRepeatedBytes- 00:15:06.248 [2024-11-05 16:39:10.815338] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:06.248 [2024-11-05 16:39:10.815370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:06.248 [2024-11-05 16:39:10.815420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057598332895231 len:1 00:15:06.248 [2024-11-05 16:39:10.815437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:06.248 [2024-11-05 16:39:10.815491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:720575936084782591 len:65536 00:15:06.248 [2024-11-05 16:39:10.815509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:06.248 [2024-11-05 16:39:10.815563] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:15:06.248 [2024-11-05 16:39:10.815579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:06.248 [2024-11-05 16:39:10.815635] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 00:15:06.248 [2024-11-05 16:39:10.815651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:15:06.505 #34 NEW cov: 12458 ft: 14243 corp: 10/407b lim: 50 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 ChangeByte- 00:15:06.505 [2024-11-05 16:39:10.875226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069498470399 len:65536 00:15:06.505 [2024-11-05 16:39:10.875256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:06.505 [2024-11-05 
16:39:10.875314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073692905727 len:65536 00:15:06.505 [2024-11-05 16:39:10.875330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:06.505 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:15:06.505 #35 NEW cov: 12481 ft: 14323 corp: 11/433b lim: 50 exec/s: 0 rss: 73Mb L: 26/50 MS: 1 ChangeBinInt- 00:15:06.505 [2024-11-05 16:39:10.935638] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:06.505 [2024-11-05 16:39:10.935670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:06.505 [2024-11-05 16:39:10.935723] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057598332895231 len:1 00:15:06.505 [2024-11-05 16:39:10.935740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:06.505 [2024-11-05 16:39:10.935796] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:720575936084836351 len:65536 00:15:06.505 [2024-11-05 16:39:10.935814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:06.505 [2024-11-05 16:39:10.935871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:15:06.505 [2024-11-05 16:39:10.935889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:06.505 #36 NEW cov: 12481 ft: 14328 corp: 12/482b lim: 50 exec/s: 0 rss: 74Mb L: 49/50 MS: 1 ChangeByte- 00:15:06.505 [2024-11-05 16:39:10.995628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356792496577 len:49602 00:15:06.505 [2024-11-05 16:39:10.995657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:06.505 [2024-11-05 16:39:10.995718] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13907158262941336001 len:49602 00:15:06.505 [2024-11-05 16:39:10.995733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:06.505 [2024-11-05 16:39:10.995789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 00:15:06.505 [2024-11-05 16:39:10.995807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:06.505 #37 NEW cov: 12481 ft: 14400 corp: 13/520b lim: 50 exec/s: 37 rss: 74Mb L: 38/50 MS: 1 ChangeBinInt- 00:15:06.505 [2024-11-05 16:39:11.055657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069498470399 len:65536 00:15:06.505 [2024-11-05 16:39:11.055686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:06.505 [2024-11-05 16:39:11.055746] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:15:06.505 [2024-11-05 16:39:11.055764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:06.505 #38 NEW cov: 12481 ft: 14475 corp: 14/546b lim: 50 exec/s: 38 rss: 74Mb L: 26/50 MS: 1 CopyPart- 00:15:06.763 [2024-11-05 16:39:11.096152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:06.763 [2024-11-05 16:39:11.096180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:06.763 [2024-11-05 16:39:11.096234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057598332895231 len:1 00:15:06.763 [2024-11-05 16:39:11.096251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:06.763 [2024-11-05 16:39:11.096303] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:720575936084836351 len:65536 00:15:06.763 [2024-11-05 16:39:11.096321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:06.763 [2024-11-05 16:39:11.096374] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446462603027808255 len:1 00:15:06.763 [2024-11-05 16:39:11.096392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:06.763 [2024-11-05 16:39:11.096446] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 00:15:06.763 [2024-11-05 16:39:11.096463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:15:06.763 #39 NEW cov: 12481 ft: 14523 corp: 15/596b lim: 50 exec/s: 39 rss: 74Mb L: 50/50 MS: 1 CMP- DE: "\000\000\000\000"- 00:15:06.763 [2024-11-05 16:39:11.136125] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:06.763 [2024-11-05 16:39:11.136153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:06.763 [2024-11-05 16:39:11.136200] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057598332895231 len:1 00:15:06.763 [2024-11-05 16:39:11.136217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:06.763 [2024-11-05 16:39:11.136267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:720575936084836351 len:65536 00:15:06.763 [2024-11-05 16:39:11.136283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:06.763 [2024-11-05 16:39:11.136340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 
lba:18413180115472089087 len:49664 00:15:06.763 [2024-11-05 16:39:11.136356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:06.763 #40 NEW cov: 12481 ft: 14539 corp: 16/638b lim: 50 exec/s: 40 rss: 74Mb L: 42/50 MS: 1 CrossOver- 00:15:06.763 [2024-11-05 16:39:11.176253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:06.763 [2024-11-05 16:39:11.176282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:06.763 [2024-11-05 16:39:11.176334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:15:06.763 [2024-11-05 16:39:11.176351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:06.763 [2024-11-05 16:39:11.176404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:15:06.763 [2024-11-05 16:39:11.176420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:06.763 [2024-11-05 16:39:11.176474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65408 00:15:06.763 [2024-11-05 16:39:11.176491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:06.763 #41 NEW cov: 12481 ft: 14546 corp: 17/687b lim: 50 exec/s: 41 rss: 74Mb L: 49/50 MS: 1 ChangeBit- 00:15:06.763 [2024-11-05 16:39:11.216100] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:06.763 [2024-11-05 16:39:11.216129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:06.763 [2024-11-05 16:39:11.216188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057598332895231 len:65536 00:15:06.763 [2024-11-05 16:39:11.216205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:06.763 #42 NEW cov: 12481 ft: 14608 corp: 18/713b lim: 50 exec/s: 42 rss: 74Mb L: 26/50 MS: 1 EraseBytes- 00:15:06.763 [2024-11-05 16:39:11.256366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356792496577 len:49602 00:15:06.763 [2024-11-05 16:39:11.256395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:06.763 [2024-11-05 16:39:11.256449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 00:15:06.763 [2024-11-05 16:39:11.256463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:06.763 [2024-11-05 16:39:11.256517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:257 00:15:06.763 [2024-11-05 
16:39:11.256535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:06.763 #43 NEW cov: 12481 ft: 14624 corp: 19/751b lim: 50 exec/s: 43 rss: 74Mb L: 38/50 MS: 1 CMP- DE: "\001\000\000\037"- 00:15:06.763 [2024-11-05 16:39:11.296501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:13961653356792496577 len:49602 00:15:06.763 [2024-11-05 16:39:11.296530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:06.763 [2024-11-05 16:39:11.296584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:13961653357748797889 len:49602 00:15:06.763 [2024-11-05 16:39:11.296598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:06.763 [2024-11-05 16:39:11.296651] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:13961653357748797889 len:49602 00:15:06.763 [2024-11-05 16:39:11.296667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:06.763 #44 NEW cov: 12481 ft: 14628 corp: 20/789b lim: 50 exec/s: 44 rss: 74Mb L: 38/50 MS: 1 ShuffleBytes- 00:15:06.763 [2024-11-05 16:39:11.336626] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:06.763 [2024-11-05 16:39:11.336655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:06.763 [2024-11-05 16:39:11.336708] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:15:06.764 [2024-11-05 16:39:11.336729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:06.764 [2024-11-05 16:39:11.336785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:15:06.764 [2024-11-05 16:39:11.336803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:07.023 #45 NEW cov: 12481 ft: 14638 corp: 21/828b lim: 50 exec/s: 45 rss: 74Mb L: 39/50 MS: 1 EraseBytes- 00:15:07.023 [2024-11-05 16:39:11.377019] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:07.023 [2024-11-05 16:39:11.377049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.377119] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:17726168133330272255 len:1 00:15:07.023 [2024-11-05 16:39:11.377136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.377192] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:720575936084836351 len:65536 00:15:07.023 [2024-11-05 16:39:11.377208] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.377263] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:15:07.023 [2024-11-05 16:39:11.377281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.377339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 00:15:07.023 [2024-11-05 16:39:11.377356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:15:07.023 #46 NEW cov: 12481 ft: 14675 corp: 22/878b lim: 50 exec/s: 46 rss: 74Mb L: 50/50 MS: 1 ChangeBinInt- 00:15:07.023 [2024-11-05 16:39:11.417010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:07.023 [2024-11-05 16:39:11.417042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.417098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:15:07.023 [2024-11-05 16:39:11.417112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.417165] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:15:07.023 [2024-11-05 16:39:11.417182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.417238] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65408 00:15:07.023 [2024-11-05 16:39:11.417255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:07.023 #47 NEW cov: 12481 ft: 14689 corp: 23/927b lim: 50 exec/s: 47 rss: 74Mb L: 49/50 MS: 1 CrossOver- 00:15:07.023 [2024-11-05 16:39:11.477295] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:07.023 [2024-11-05 16:39:11.477327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.477377] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057598332895231 len:1 00:15:07.023 [2024-11-05 16:39:11.477394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.477449] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744069415108607 len:65536 00:15:07.023 [2024-11-05 16:39:11.477465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.477519] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:16896 00:15:07.023 [2024-11-05 16:39:11.477535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.477588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 00:15:07.023 [2024-11-05 16:39:11.477606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:15:07.023 #48 NEW cov: 12481 ft: 14706 corp: 24/977b lim: 50 exec/s: 48 rss: 74Mb L: 50/50 MS: 1 InsertByte- 00:15:07.023 [2024-11-05 16:39:11.517254] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071732658175 len:65536 00:15:07.023 [2024-11-05 16:39:11.517284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.517339] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057598332895231 len:1 00:15:07.023 [2024-11-05 16:39:11.517354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.517410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:720575936084836351 len:65536 00:15:07.023 [2024-11-05 16:39:11.517426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.517480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:15:07.023 [2024-11-05 16:39:11.517496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:07.023 #49 NEW cov: 12481 ft: 14716 corp: 25/1026b lim: 50 exec/s: 49 rss: 74Mb L: 49/50 MS: 1 ChangeBit- 00:15:07.023 [2024-11-05 16:39:11.557527] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:07.023 [2024-11-05 16:39:11.557557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.557607] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057598332895231 len:1 00:15:07.023 [2024-11-05 16:39:11.557623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.557673] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:720575936084836351 len:65536 00:15:07.023 [2024-11-05 16:39:11.557689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.557742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:15:07.023 [2024-11-05 16:39:11.557758] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.557812] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:65536 00:15:07.023 [2024-11-05 16:39:11.557830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:15:07.023 #50 NEW cov: 12481 ft: 14724 corp: 26/1076b lim: 50 exec/s: 50 rss: 74Mb L: 50/50 MS: 1 ShuffleBytes- 00:15:07.023 [2024-11-05 16:39:11.597234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069498470399 len:65536 00:15:07.023 [2024-11-05 16:39:11.597265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:07.023 [2024-11-05 16:39:11.597319] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:15:07.023 [2024-11-05 16:39:11.597336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:07.282 #51 NEW cov: 12481 ft: 14755 corp: 27/1102b lim: 50 exec/s: 51 rss: 74Mb L: 26/50 MS: 1 CMP- DE: "\037\000"- 00:15:07.282 [2024-11-05 16:39:11.637380] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744069498470399 len:65536 00:15:07.282 [2024-11-05 16:39:11.637410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:07.282 [2024-11-05 16:39:11.637462] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446655013267701759 len:65536 00:15:07.282 [2024-11-05 16:39:11.637476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:07.282 #52 NEW cov: 12481 ft: 14787 corp: 28/1129b lim: 50 exec/s: 52 rss: 74Mb L: 27/50 MS: 1 InsertByte- 00:15:07.282 [2024-11-05 16:39:11.677735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:07.282 [2024-11-05 16:39:11.677765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:07.282 [2024-11-05 16:39:11.677816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057598332895231 len:1 00:15:07.282 [2024-11-05 16:39:11.677833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:07.282 [2024-11-05 16:39:11.677887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:720575051321573375 len:1 00:15:07.282 [2024-11-05 16:39:11.677904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:07.282 [2024-11-05 16:39:11.677962] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744069431361535 len:65536 00:15:07.282 [2024-11-05 16:39:11.677979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:07.282 #53 NEW cov: 12481 ft: 14796 corp: 29/1178b lim: 50 exec/s: 53 rss: 74Mb L: 49/50 MS: 1 ChangeBinInt- 00:15:07.282 [2024-11-05 16:39:11.718033] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071732658175 len:65536 00:15:07.282 [2024-11-05 16:39:11.718062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:07.282 [2024-11-05 16:39:11.718111] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:72057598332895231 len:1 00:15:07.282 [2024-11-05 16:39:11.718127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:07.282 [2024-11-05 16:39:11.718181] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:720575936084836351 len:65536 00:15:07.282 [2024-11-05 16:39:11.718197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:07.282 [2024-11-05 16:39:11.718251] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:15:07.282 [2024-11-05 16:39:11.718266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:07.282 [2024-11-05 16:39:11.718324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18383130728972943359 len:65536 00:15:07.282 [2024-11-05 16:39:11.718345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:15:07.282 [2024-11-05 16:39:11.778287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071732658175 len:65536 00:15:07.282 [2024-11-05 16:39:11.778317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:07.282 [2024-11-05 16:39:11.778366] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:143833717394112256 len:1 00:15:07.282 [2024-11-05 16:39:11.778383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:07.282 [2024-11-05 16:39:11.778441] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:720575936084836351 len:65536 00:15:07.282 [2024-11-05 16:39:11.778458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:07.282 [2024-11-05 16:39:11.778513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:15:07.282 [2024-11-05 16:39:11.778530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:07.282 [2024-11-05 16:39:11.778588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18383130728972943359 len:65536 00:15:07.282 [2024-11-05 16:39:11.778605] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:15:07.282 #55 NEW cov: 12481 ft: 14811 corp: 30/1228b lim: 50 exec/s: 55 rss: 74Mb L: 50/50 MS: 2 InsertByte-ShuffleBytes- 00:15:07.282 [2024-11-05 16:39:11.818046] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:07.282 [2024-11-05 16:39:11.818075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:07.282 [2024-11-05 16:39:11.818114] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18375248334408384511 len:65281 00:15:07.282 [2024-11-05 16:39:11.818131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:07.282 #56 NEW cov: 12481 ft: 14826 corp: 31/1254b lim: 50 exec/s: 56 rss: 74Mb L: 26/50 MS: 1 ShuffleBytes- 00:15:07.541 [2024-11-05 16:39:11.878362] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446463698328354815 len:32 00:15:07.541 [2024-11-05 16:39:11.878392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:07.541 [2024-11-05 16:39:11.878444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:15:07.541 [2024-11-05 16:39:11.878458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:07.541 [2024-11-05 16:39:11.878511] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:2234066890152476671 len:65536 00:15:07.541 [2024-11-05 16:39:11.878528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:07.541 #57 NEW cov: 12481 ft: 14835 corp: 32/1284b lim: 50 exec/s: 57 rss: 74Mb L: 30/50 MS: 1 PersAutoDict- DE: "\001\000\000\037"- 00:15:07.541 [2024-11-05 16:39:11.938625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744071730561023 len:65536 00:15:07.541 [2024-11-05 16:39:11.938655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:07.541 [2024-11-05 16:39:11.938720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:15:07.541 [2024-11-05 16:39:11.938735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:07.541 [2024-11-05 16:39:11.938788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:15:07.541 [2024-11-05 16:39:11.938805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:07.541 [2024-11-05 16:39:11.938859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65408 00:15:07.541 [2024-11-05 16:39:11.938876] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:15:07.541 #58 NEW cov: 12481 ft: 14841 corp: 33/1333b lim: 50 exec/s: 58 rss: 74Mb L: 49/50 MS: 1 ShuffleBytes-
00:15:07.541 [2024-11-05 16:39:11.998571] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446475788661293055 len:65536
00:15:07.541 [2024-11-05 16:39:11.998603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:15:07.541 [2024-11-05 16:39:11.998648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536
00:15:07.541 [2024-11-05 16:39:11.998666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:15:07.541 #59 NEW cov: 12481 ft: 14859 corp: 34/1359b lim: 50 exec/s: 29 rss: 74Mb L: 26/50 MS: 1 ChangeByte-
00:15:07.541 #59 DONE cov: 12481 ft: 14859 corp: 34/1359b lim: 50 exec/s: 29 rss: 74Mb
00:15:07.541 ###### Recommended dictionary. ######
00:15:07.541 "\000\000\000\000" # Uses: 0
00:15:07.541 "\001\000\000\037" # Uses: 1
00:15:07.541 "\037\000" # Uses: 0
00:15:07.541 ###### End of recommended dictionary. ######
00:15:07.541 Done 59 runs in 2 second(s)
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420'
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:15:07.800 16:39:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20
[2024-11-05 16:39:12.202459] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization...
[2024-11-05 16:39:12.202542] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3528178 ]
[2024-11-05 16:39:12.553928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-05 16:39:12.611890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-05 16:39:12.675883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-11-05 16:39:12.692120] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
INFO: Running with entropic power schedule (0xFF, 100).
INFO: Seed: 152701881
INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f),
INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0),
INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20
INFO: A corpus is not provided, starting from an empty corpus
#2 INITED exec/s: 0 rss: 66Mb
WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:15:08.317 This may also happen if the target rejected all inputs we tried so far 00:15:08.317 [2024-11-05 16:39:12.737799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:08.317 [2024-11-05 16:39:12.737834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:08.317 [2024-11-05 16:39:12.737879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:08.317 [2024-11-05 16:39:12.737896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:08.575 NEW_FUNC[1/716]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:15:08.575 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:15:08.576 #15 NEW cov: 12303 ft: 12294 corp: 2/43b lim: 90 exec/s: 0 rss: 73Mb L: 42/42 MS: 3 InsertByte-CopyPart-InsertRepeatedBytes- 00:15:08.576 [2024-11-05 16:39:13.058630] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:08.576 [2024-11-05 16:39:13.058669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:08.576 [2024-11-05 16:39:13.058702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:08.576 [2024-11-05 16:39:13.058723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:08.576 NEW_FUNC[1/1]: 0x17d6a58 in nvme_ctrlr_get_ready_timeout /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_ctrlr.c:1292 00:15:08.576 #16 NEW cov: 12425 ft: 12760 corp: 3/85b lim: 90 exec/s: 0 rss: 73Mb L: 42/42 MS: 1 ChangeBinInt- 00:15:08.576 [2024-11-05 16:39:13.118953] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:08.576 [2024-11-05 16:39:13.118982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:08.576 [2024-11-05 16:39:13.119029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:08.576 [2024-11-05 16:39:13.119048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:08.576 [2024-11-05 16:39:13.119100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:08.576 [2024-11-05 16:39:13.119115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:08.576 [2024-11-05 16:39:13.119167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:08.576 [2024-11-05 16:39:13.119182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:08.576 #20 NEW cov: 12431 ft: 13445 corp: 4/166b lim: 90 exec/s: 0 rss: 73Mb L: 81/81 MS: 4 InsertByte-CrossOver-EraseBytes-InsertRepeatedBytes- 00:15:08.576 
[2024-11-05 16:39:13.158767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:08.576 [2024-11-05 16:39:13.158795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:08.576 [2024-11-05 16:39:13.158847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:08.576 [2024-11-05 16:39:13.158863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:08.834 #26 NEW cov: 12516 ft: 13733 corp: 5/212b lim: 90 exec/s: 0 rss: 73Mb L: 46/81 MS: 1 CrossOver- 00:15:08.834 [2024-11-05 16:39:13.199180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:08.834 [2024-11-05 16:39:13.199209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:08.834 [2024-11-05 16:39:13.199261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:08.834 [2024-11-05 16:39:13.199277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:08.834 [2024-11-05 16:39:13.199327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:08.834 [2024-11-05 16:39:13.199341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:08.834 [2024-11-05 16:39:13.199395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:08.834 [2024-11-05 16:39:13.199411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:08.835 #27 NEW cov: 12516 ft: 13790 corp: 6/293b lim: 90 exec/s: 0 rss: 73Mb L: 81/81 MS: 1 CopyPart- 00:15:08.835 [2024-11-05 16:39:13.259358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:08.835 [2024-11-05 16:39:13.259386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:08.835 [2024-11-05 16:39:13.259434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:08.835 [2024-11-05 16:39:13.259451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:08.835 [2024-11-05 16:39:13.259497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:08.835 [2024-11-05 16:39:13.259514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:08.835 [2024-11-05 16:39:13.259568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:08.835 [2024-11-05 16:39:13.259584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:08.835 #28 NEW cov: 12516 ft: 13844 corp: 7/371b lim: 90 exec/s: 0 rss: 73Mb L: 78/81 MS: 1 InsertRepeatedBytes- 00:15:08.835 
[2024-11-05 16:39:13.299446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:08.835 [2024-11-05 16:39:13.299474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:08.835 [2024-11-05 16:39:13.299526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:08.835 [2024-11-05 16:39:13.299541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:08.835 [2024-11-05 16:39:13.299595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:08.835 [2024-11-05 16:39:13.299612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:08.835 [2024-11-05 16:39:13.299669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:08.835 [2024-11-05 16:39:13.299687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:08.835 #29 NEW cov: 12516 ft: 14024 corp: 8/452b lim: 90 exec/s: 0 rss: 73Mb L: 81/81 MS: 1 ChangeByte- 00:15:08.835 [2024-11-05 16:39:13.359612] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:08.835 [2024-11-05 16:39:13.359641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:08.835 [2024-11-05 16:39:13.359699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:08.835 [2024-11-05 16:39:13.359727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:08.835 [2024-11-05 16:39:13.359807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:08.835 [2024-11-05 16:39:13.359825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:08.835 [2024-11-05 16:39:13.359879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:08.835 [2024-11-05 16:39:13.359896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:08.835 #30 NEW cov: 12516 ft: 14050 corp: 9/533b lim: 90 exec/s: 0 rss: 73Mb L: 81/81 MS: 1 ShuffleBytes- 00:15:08.835 [2024-11-05 16:39:13.399747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:08.835 [2024-11-05 16:39:13.399776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:08.835 [2024-11-05 16:39:13.399825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:08.835 [2024-11-05 16:39:13.399842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:08.835 [2024-11-05 16:39:13.399896] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 
cid:2 nsid:0 00:15:08.835 [2024-11-05 16:39:13.399912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:08.835 [2024-11-05 16:39:13.399966] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:08.835 [2024-11-05 16:39:13.399983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:09.094 #31 NEW cov: 12516 ft: 14117 corp: 10/614b lim: 90 exec/s: 0 rss: 73Mb L: 81/81 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\002"- 00:15:09.094 [2024-11-05 16:39:13.459949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.094 [2024-11-05 16:39:13.459980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.094 [2024-11-05 16:39:13.460032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.094 [2024-11-05 16:39:13.460046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:09.094 [2024-11-05 16:39:13.460097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:09.094 [2024-11-05 16:39:13.460114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:09.094 [2024-11-05 16:39:13.460168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:09.094 [2024-11-05 16:39:13.460184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:09.094 #32 NEW cov: 12516 ft: 14177 corp: 11/692b lim: 90 exec/s: 0 rss: 73Mb L: 78/81 MS: 1 CrossOver- 00:15:09.094 [2024-11-05 16:39:13.500215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.094 [2024-11-05 16:39:13.500245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.094 [2024-11-05 16:39:13.500293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.094 [2024-11-05 16:39:13.500310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:09.094 [2024-11-05 16:39:13.500361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:09.094 [2024-11-05 16:39:13.500378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:09.094 [2024-11-05 16:39:13.500430] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:09.095 [2024-11-05 16:39:13.500446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:09.095 [2024-11-05 16:39:13.500499] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:15:09.095 [2024-11-05 16:39:13.500515] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:15:09.095 #33 NEW cov: 12516 ft: 14237 corp: 12/782b lim: 90 exec/s: 0 rss: 73Mb L: 90/90 MS: 1 InsertRepeatedBytes- 00:15:09.095 [2024-11-05 16:39:13.540156] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.095 [2024-11-05 16:39:13.540182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.095 [2024-11-05 16:39:13.540233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.095 [2024-11-05 16:39:13.540249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:09.095 [2024-11-05 16:39:13.540303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:09.095 [2024-11-05 16:39:13.540318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:09.095 [2024-11-05 16:39:13.540372] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:09.095 [2024-11-05 16:39:13.540388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:09.095 #34 NEW cov: 12516 ft: 14299 corp: 13/861b lim: 90 exec/s: 0 rss: 73Mb L: 79/90 MS: 1 InsertByte- 00:15:09.095 [2024-11-05 16:39:13.600336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.095 [2024-11-05 16:39:13.600365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.095 [2024-11-05 16:39:13.600419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.095 [2024-11-05 16:39:13.600433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:09.095 [2024-11-05 16:39:13.600489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:09.095 [2024-11-05 16:39:13.600506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:09.095 [2024-11-05 16:39:13.600561] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:09.095 [2024-11-05 16:39:13.600577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:09.095 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:15:09.095 #35 NEW cov: 12539 ft: 14368 corp: 14/937b lim: 90 exec/s: 0 rss: 73Mb L: 76/90 MS: 1 EraseBytes- 00:15:09.095 [2024-11-05 16:39:13.639940] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.095 [2024-11-05 16:39:13.639967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.354 #36 NEW cov: 12539 ft: 15149 corp: 15/960b lim: 90 exec/s: 0 
rss: 73Mb L: 23/90 MS: 1 EraseBytes- 00:15:09.354 [2024-11-05 16:39:13.700350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.354 [2024-11-05 16:39:13.700377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.354 [2024-11-05 16:39:13.700435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.354 [2024-11-05 16:39:13.700451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:09.354 #37 NEW cov: 12539 ft: 15181 corp: 16/996b lim: 90 exec/s: 0 rss: 73Mb L: 36/90 MS: 1 EraseBytes- 00:15:09.354 [2024-11-05 16:39:13.740770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.354 [2024-11-05 16:39:13.740797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.354 [2024-11-05 16:39:13.740844] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.354 [2024-11-05 16:39:13.740860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:09.354 [2024-11-05 16:39:13.740900] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:09.354 [2024-11-05 16:39:13.740917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:09.354 [2024-11-05 16:39:13.740968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:09.354 [2024-11-05 16:39:13.740984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:09.354 #38 NEW cov: 12539 ft: 15214 corp: 17/1085b lim: 90 exec/s: 38 rss: 73Mb L: 89/90 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\002"- 00:15:09.354 [2024-11-05 16:39:13.780931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.354 [2024-11-05 16:39:13.780958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.354 [2024-11-05 16:39:13.781007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.354 [2024-11-05 16:39:13.781024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:09.354 [2024-11-05 16:39:13.781075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:09.354 [2024-11-05 16:39:13.781091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:09.354 [2024-11-05 16:39:13.781146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:09.354 [2024-11-05 16:39:13.781162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:09.354 #39 NEW cov: 12539 ft: 15266 
corp: 18/1163b lim: 90 exec/s: 39 rss: 73Mb L: 78/90 MS: 1 ChangeBinInt- 00:15:09.354 [2024-11-05 16:39:13.821001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.354 [2024-11-05 16:39:13.821029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.354 [2024-11-05 16:39:13.821082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.354 [2024-11-05 16:39:13.821095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:09.354 [2024-11-05 16:39:13.821146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:09.354 [2024-11-05 16:39:13.821162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:09.354 [2024-11-05 16:39:13.821216] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:09.354 [2024-11-05 16:39:13.821232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:09.354 #40 NEW cov: 12539 ft: 15305 corp: 19/1244b lim: 90 exec/s: 40 rss: 73Mb L: 81/90 MS: 1 CopyPart- 00:15:09.354 [2024-11-05 16:39:13.881186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.354 [2024-11-05 16:39:13.881215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.354 [2024-11-05 16:39:13.881265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.354 [2024-11-05 16:39:13.881283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:09.354 [2024-11-05 16:39:13.881337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:09.354 [2024-11-05 16:39:13.881356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:09.354 [2024-11-05 16:39:13.881412] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:09.354 [2024-11-05 16:39:13.881429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:09.354 #41 NEW cov: 12539 ft: 15320 corp: 20/1322b lim: 90 exec/s: 41 rss: 74Mb L: 78/90 MS: 1 ChangeBinInt- 00:15:09.354 [2024-11-05 16:39:13.920980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.354 [2024-11-05 16:39:13.921007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.355 [2024-11-05 16:39:13.921087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.355 [2024-11-05 16:39:13.921110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:09.613 #42 NEW cov: 12539 ft: 15336 
corp: 21/1368b lim: 90 exec/s: 42 rss: 74Mb L: 46/90 MS: 1 EraseBytes- 00:15:09.613 [2024-11-05 16:39:13.981468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.613 [2024-11-05 16:39:13.981496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.613 [2024-11-05 16:39:13.981549] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.613 [2024-11-05 16:39:13.981565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:09.613 [2024-11-05 16:39:13.981618] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:09.613 [2024-11-05 16:39:13.981634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:09.613 [2024-11-05 16:39:13.981690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:09.613 [2024-11-05 16:39:13.981706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:09.613 #43 NEW cov: 12539 ft: 15356 corp: 22/1457b lim: 90 exec/s: 43 rss: 74Mb L: 89/90 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\002"- 00:15:09.613 [2024-11-05 16:39:14.021575] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.613 [2024-11-05 16:39:14.021603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.613 [2024-11-05 16:39:14.021650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.613 [2024-11-05 16:39:14.021668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:09.613 [2024-11-05 16:39:14.021717] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:09.613 [2024-11-05 16:39:14.021749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:09.613 [2024-11-05 16:39:14.021803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:09.613 [2024-11-05 16:39:14.021819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:09.613 #44 NEW cov: 12539 ft: 15374 corp: 23/1538b lim: 90 exec/s: 44 rss: 74Mb L: 81/90 MS: 1 ShuffleBytes- 00:15:09.613 [2024-11-05 16:39:14.061901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.613 [2024-11-05 16:39:14.061928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.613 [2024-11-05 16:39:14.061992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.613 [2024-11-05 16:39:14.062008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
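
Note on the status lines: each "#N NEW" record above is libFuzzer progress output, where "cov" counts covered code edges, "ft" counts distinct coverage features, "corp" gives the corpus size in units and total bytes, "lim" is the current input-length cap, "exec/s" is executions per second, and "MS" names the mutation sequence that produced the input. As a minimal sketch, assuming this console output has been saved to a file (the name build.log is a placeholder, not an artifact this job produces), the corpus-growth curve can be pulled out with:

    # list every corpus-growing input recorded in a saved copy of this log
    grep -oE '#[0-9]+ NEW cov: [0-9]+ ft: [0-9]+ corp: [0-9]+/[0-9]+b' build.log
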
00:15:09.613 [2024-11-05 16:39:14.062056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:09.613 [2024-11-05 16:39:14.062073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:09.613 [2024-11-05 16:39:14.062126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:09.613 [2024-11-05 16:39:14.062143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:09.613 [2024-11-05 16:39:14.062197] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:15:09.613 [2024-11-05 16:39:14.062214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:15:09.613 #45 NEW cov: 12539 ft: 15384 corp: 24/1628b lim: 90 exec/s: 45 rss: 74Mb L: 90/90 MS: 1 ChangeBit- 00:15:09.613 [2024-11-05 16:39:14.121878] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.613 [2024-11-05 16:39:14.121907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.613 [2024-11-05 16:39:14.121957] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.613 [2024-11-05 16:39:14.121973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:09.613 [2024-11-05 16:39:14.122027] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:09.613 [2024-11-05 16:39:14.122043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:09.613 [2024-11-05 16:39:14.122100] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:09.613 [2024-11-05 16:39:14.122116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:09.614 #46 NEW cov: 12539 ft: 15393 corp: 25/1710b lim: 90 exec/s: 46 rss: 74Mb L: 82/90 MS: 1 InsertByte- 00:15:09.614 [2024-11-05 16:39:14.161997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.614 [2024-11-05 16:39:14.162025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.614 [2024-11-05 16:39:14.162074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.614 [2024-11-05 16:39:14.162091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:09.614 [2024-11-05 16:39:14.162143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:09.614 [2024-11-05 16:39:14.162161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:09.614 [2024-11-05 16:39:14.162217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE 
(11) sqid:1 cid:3 nsid:0 00:15:09.614 [2024-11-05 16:39:14.162234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:09.872 #47 NEW cov: 12539 ft: 15483 corp: 26/1799b lim: 90 exec/s: 47 rss: 74Mb L: 89/90 MS: 1 CrossOver- 00:15:09.872 [2024-11-05 16:39:14.221661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.872 [2024-11-05 16:39:14.221686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.872 #48 NEW cov: 12539 ft: 15532 corp: 27/1822b lim: 90 exec/s: 48 rss: 74Mb L: 23/90 MS: 1 ChangeBinInt- 00:15:09.872 [2024-11-05 16:39:14.282317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.872 [2024-11-05 16:39:14.282344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.872 [2024-11-05 16:39:14.282392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.872 [2024-11-05 16:39:14.282408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:09.872 [2024-11-05 16:39:14.282449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:09.872 [2024-11-05 16:39:14.282465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:09.872 [2024-11-05 16:39:14.282521] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:09.872 [2024-11-05 16:39:14.282540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:09.872 #49 NEW cov: 12539 ft: 15542 corp: 28/1908b lim: 90 exec/s: 49 rss: 74Mb L: 86/90 MS: 1 InsertRepeatedBytes- 00:15:09.872 [2024-11-05 16:39:14.322394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.872 [2024-11-05 16:39:14.322421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.872 [2024-11-05 16:39:14.322468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.872 [2024-11-05 16:39:14.322484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:09.872 [2024-11-05 16:39:14.322523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:09.872 [2024-11-05 16:39:14.322540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:09.872 [2024-11-05 16:39:14.322593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:09.872 [2024-11-05 16:39:14.322609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:09.872 #50 NEW cov: 12539 ft: 15566 corp: 29/1997b lim: 90 exec/s: 50 rss: 74Mb L: 89/90 MS: 1 ChangeByte- 
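
Note on the dictionary entries: records tagged "PersAutoDict-" replay a value from libFuzzer's persistent auto-dictionary, and the "DE:" field prints that value in octal; the same 8-byte pattern appears again with its use count under "Recommended dictionary" in the end-of-run summary below. As a hedged sketch, one could seed a later run with this value explicitly via a -dict= file in the AFL/libFuzzer dictionary syntax (the entry name is invented, and the octal bytes are re-expressed here as hex escapes):

    # write a one-entry dictionary file for the recurring 8-byte value
    printf '%s\n' 'resv_acquire_magic="\x01\x00\x00\x00\x00\x00\x00\x02"' > /tmp/nvmf_resv.dict
    # then append -dict=/tmp/nvmf_resv.dict to the llvm_nvme_fuzz command line
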
00:15:09.872 [2024-11-05 16:39:14.382597] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.872 [2024-11-05 16:39:14.382625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.872 [2024-11-05 16:39:14.382675] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.872 [2024-11-05 16:39:14.382693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:09.872 [2024-11-05 16:39:14.382745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:09.872 [2024-11-05 16:39:14.382761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:09.872 [2024-11-05 16:39:14.382814] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:09.872 [2024-11-05 16:39:14.382830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:09.872 #51 NEW cov: 12539 ft: 15582 corp: 30/2086b lim: 90 exec/s: 51 rss: 74Mb L: 89/90 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\002"- 00:15:09.872 [2024-11-05 16:39:14.422701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:09.872 [2024-11-05 16:39:14.422731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:09.872 [2024-11-05 16:39:14.422780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:09.872 [2024-11-05 16:39:14.422796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:09.872 [2024-11-05 16:39:14.422835] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:09.872 [2024-11-05 16:39:14.422852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:09.872 [2024-11-05 16:39:14.422907] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:09.872 [2024-11-05 16:39:14.422924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:10.131 #52 NEW cov: 12539 ft: 15611 corp: 31/2175b lim: 90 exec/s: 52 rss: 74Mb L: 89/90 MS: 1 InsertRepeatedBytes- 00:15:10.131 [2024-11-05 16:39:14.482939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:10.131 [2024-11-05 16:39:14.482967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:10.131 [2024-11-05 16:39:14.483029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:10.131 [2024-11-05 16:39:14.483050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:10.131 [2024-11-05 16:39:14.483123] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:10.131 [2024-11-05 16:39:14.483143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:10.131 [2024-11-05 16:39:14.483201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:10.131 [2024-11-05 16:39:14.483221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:10.131 #53 NEW cov: 12539 ft: 15622 corp: 32/2256b lim: 90 exec/s: 53 rss: 74Mb L: 81/90 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\002"- 00:15:10.131 [2024-11-05 16:39:14.543103] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:10.131 [2024-11-05 16:39:14.543131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:10.131 [2024-11-05 16:39:14.543180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:10.131 [2024-11-05 16:39:14.543196] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:10.131 [2024-11-05 16:39:14.543247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:10.131 [2024-11-05 16:39:14.543263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:10.131 [2024-11-05 16:39:14.543315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:10.131 [2024-11-05 16:39:14.543332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:10.131 #54 NEW cov: 12539 ft: 15627 corp: 33/2345b lim: 90 exec/s: 54 rss: 74Mb L: 89/90 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\002"- 00:15:10.131 [2024-11-05 16:39:14.602731] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:10.131 [2024-11-05 16:39:14.602757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:10.131 #55 NEW cov: 12539 ft: 15670 corp: 34/2368b lim: 90 exec/s: 55 rss: 74Mb L: 23/90 MS: 1 ChangeBit- 00:15:10.131 [2024-11-05 16:39:14.643318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:10.131 [2024-11-05 16:39:14.643345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:10.131 [2024-11-05 16:39:14.643390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:10.131 [2024-11-05 16:39:14.643406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:10.131 [2024-11-05 16:39:14.643445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:10.132 [2024-11-05 16:39:14.643475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:15:10.132 [2024-11-05 16:39:14.643531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:10.132 [2024-11-05 16:39:14.643550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:10.132 #56 NEW cov: 12539 ft: 15683 corp: 35/2449b lim: 90 exec/s: 56 rss: 74Mb L: 81/90 MS: 1 ChangeBit- 00:15:10.132 [2024-11-05 16:39:14.683410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:10.132 [2024-11-05 16:39:14.683438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:10.132 [2024-11-05 16:39:14.683489] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:10.132 [2024-11-05 16:39:14.683505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:10.132 [2024-11-05 16:39:14.683560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:10.132 [2024-11-05 16:39:14.683583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:10.132 [2024-11-05 16:39:14.683637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:10.132 [2024-11-05 16:39:14.683654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:10.132 #57 NEW cov: 12539 ft: 15706 corp: 36/2531b lim: 90 exec/s: 57 rss: 74Mb L: 82/90 MS: 1 InsertByte- 00:15:10.390 [2024-11-05 16:39:14.723740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:15:10.390 [2024-11-05 16:39:14.723768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:10.390 [2024-11-05 16:39:14.723820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:15:10.390 [2024-11-05 16:39:14.723838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:10.390 [2024-11-05 16:39:14.723873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:15:10.390 [2024-11-05 16:39:14.723888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:10.390 [2024-11-05 16:39:14.723942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:15:10.390 [2024-11-05 16:39:14.723959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:10.390 [2024-11-05 16:39:14.724013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:4 nsid:0 00:15:10.390 [2024-11-05 16:39:14.724029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:15:10.390 #58 NEW cov: 12539 ft: 15714 corp: 37/2621b lim: 90 exec/s: 29 rss: 74Mb L: 90/90 
MS: 1 InsertByte- 00:15:10.390 #58 DONE cov: 12539 ft: 15714 corp: 37/2621b lim: 90 exec/s: 29 rss: 74Mb 00:15:10.390 ###### Recommended dictionary. ###### 00:15:10.390 "\001\000\000\000\000\000\000\002" # Uses: 5 00:15:10.390 ###### End of recommended dictionary. ###### 00:15:10.390 Done 58 runs in 2 second(s) 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:15:10.390 16:39:14 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:15:10.390 [2024-11-05 16:39:14.926737] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
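
Note on the setup trace: the nvmf/run.sh lines above give each fuzzer index its own TCP listener by zero-padding the index with printf %02d and appending it to a "44" prefix (index 21 becomes port 4421), then rewriting the trsvcid in the shared JSON template with sed so the target listens there. A rough reconstruction of that logic, with the variable name and the redirection target guessed from the trace and the -c flag rather than taken from the script itself:

    # sketch of the per-fuzzer port setup visible in the run.sh trace
    i=21
    port="44$(printf %02d "$i")"   # zero-padded index appended to 44 -> 4421
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        test/fuzz/llvm/nvmf/fuzz_json.conf > "/tmp/fuzz_json_${i}.conf"
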
00:15:10.390 [2024-11-05 16:39:14.926820] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3528553 ] 00:15:10.956 [2024-11-05 16:39:15.258497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.956 [2024-11-05 16:39:15.324540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.956 [2024-11-05 16:39:15.388781] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.956 [2024-11-05 16:39:15.405022] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:15:10.956 INFO: Running with entropic power schedule (0xFF, 100). 00:15:10.956 INFO: Seed: 2863705830 00:15:10.956 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:15:10.956 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:15:10.956 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:15:10.956 INFO: A corpus is not provided, starting from an empty corpus 00:15:10.956 #2 INITED exec/s: 0 rss: 66Mb 00:15:10.956 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:15:10.956 This may also happen if the target rejected all inputs we tried so far 00:15:10.956 [2024-11-05 16:39:15.454486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:10.956 [2024-11-05 16:39:15.454520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:11.523 NEW_FUNC[1/717]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:15:11.523 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:15:11.523 #7 NEW cov: 12287 ft: 12283 corp: 2/13b lim: 50 exec/s: 0 rss: 73Mb L: 12/12 MS: 5 ChangeBit-ChangeByte-InsertRepeatedBytes-ChangeBit-CMP- DE: "\001\000\000\000\000\000\000<"- 00:15:11.523 [2024-11-05 16:39:15.915650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:11.523 [2024-11-05 16:39:15.915693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:11.523 #8 NEW cov: 12400 ft: 12840 corp: 3/32b lim: 50 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:15:11.523 [2024-11-05 16:39:15.955633] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:11.523 [2024-11-05 16:39:15.955662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:11.523 #9 NEW cov: 12406 ft: 13128 corp: 4/51b lim: 50 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:15:11.523 [2024-11-05 16:39:16.015874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:11.523 [2024-11-05 16:39:16.015904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:11.523 #10 NEW 
cov: 12491 ft: 13522 corp: 5/70b lim: 50 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000<"- 00:15:11.523 [2024-11-05 16:39:16.056092] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:11.523 [2024-11-05 16:39:16.056120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:11.523 [2024-11-05 16:39:16.056181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:11.523 [2024-11-05 16:39:16.056197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:11.523 #11 NEW cov: 12491 ft: 14303 corp: 6/99b lim: 50 exec/s: 0 rss: 73Mb L: 29/29 MS: 1 CopyPart- 00:15:11.781 [2024-11-05 16:39:16.116445] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:11.781 [2024-11-05 16:39:16.116474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:11.781 [2024-11-05 16:39:16.116531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:11.781 [2024-11-05 16:39:16.116545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:11.781 [2024-11-05 16:39:16.116600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:15:11.781 [2024-11-05 16:39:16.116616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:11.781 #12 NEW cov: 12491 ft: 14658 corp: 7/134b lim: 50 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:15:11.781 [2024-11-05 16:39:16.156531] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:11.781 [2024-11-05 16:39:16.156558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:11.781 [2024-11-05 16:39:16.156615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:11.781 [2024-11-05 16:39:16.156629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:11.781 [2024-11-05 16:39:16.156686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:15:11.781 [2024-11-05 16:39:16.156704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:11.781 #13 NEW cov: 12491 ft: 14782 corp: 8/169b lim: 50 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ChangeByte- 00:15:11.781 [2024-11-05 16:39:16.216349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:11.782 [2024-11-05 16:39:16.216378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:11.782 #14 NEW cov: 12491 ft: 14830 corp: 9/181b lim: 50 exec/s: 0 rss: 73Mb L: 12/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000<"- 00:15:11.782 [2024-11-05 16:39:16.276906] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:11.782 [2024-11-05 16:39:16.276934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:11.782 [2024-11-05 16:39:16.276995] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:11.782 [2024-11-05 16:39:16.277013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:11.782 [2024-11-05 16:39:16.277073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:15:11.782 [2024-11-05 16:39:16.277091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:11.782 #15 NEW cov: 12491 ft: 14860 corp: 10/216b lim: 50 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:15:11.782 [2024-11-05 16:39:16.336702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:11.782 [2024-11-05 16:39:16.336736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:11.782 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:15:11.782 #23 NEW cov: 12514 ft: 14943 corp: 11/234b lim: 50 exec/s: 0 rss: 74Mb L: 18/35 MS: 3 ChangeByte-PersAutoDict-InsertRepeatedBytes- DE: "\001\000\000\000\000\000\000<"- 00:15:12.041 [2024-11-05 16:39:16.376973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.041 [2024-11-05 16:39:16.377000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.041 [2024-11-05 16:39:16.377056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.041 [2024-11-05 16:39:16.377073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.041 #24 NEW cov: 12514 ft: 14993 corp: 12/259b lim: 50 exec/s: 0 rss: 74Mb L: 25/35 MS: 1 EraseBytes- 00:15:12.041 [2024-11-05 16:39:16.417253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.041 [2024-11-05 16:39:16.417279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.041 [2024-11-05 16:39:16.417338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.041 [2024-11-05 16:39:16.417352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.041 [2024-11-05 16:39:16.417406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:15:12.041 [2024-11-05 16:39:16.417422] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:12.041 #25 NEW cov: 12514 ft: 15003 corp: 13/297b lim: 50 exec/s: 25 rss: 74Mb L: 38/38 MS: 1 InsertRepeatedBytes- 00:15:12.041 [2024-11-05 16:39:16.477100] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.041 [2024-11-05 16:39:16.477127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.041 #29 NEW cov: 12514 ft: 15017 corp: 14/307b lim: 50 exec/s: 29 rss: 74Mb L: 10/38 MS: 4 EraseBytes-CopyPart-InsertByte-InsertByte- 00:15:12.041 [2024-11-05 16:39:16.517389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.041 [2024-11-05 16:39:16.517416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.041 [2024-11-05 16:39:16.517481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.041 [2024-11-05 16:39:16.517501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.041 #30 NEW cov: 12514 ft: 15024 corp: 15/333b lim: 50 exec/s: 30 rss: 74Mb L: 26/38 MS: 1 InsertRepeatedBytes- 00:15:12.041 [2024-11-05 16:39:16.577570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.041 [2024-11-05 16:39:16.577597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.041 [2024-11-05 16:39:16.577656] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.041 [2024-11-05 16:39:16.577672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.041 #31 NEW cov: 12514 ft: 15058 corp: 16/359b lim: 50 exec/s: 31 rss: 74Mb L: 26/38 MS: 1 ShuffleBytes- 00:15:12.300 [2024-11-05 16:39:16.637912] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.300 [2024-11-05 16:39:16.637940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.300 [2024-11-05 16:39:16.637997] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.300 [2024-11-05 16:39:16.638010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.300 [2024-11-05 16:39:16.638070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:15:12.300 [2024-11-05 16:39:16.638087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:12.300 #32 NEW cov: 12514 ft: 15106 corp: 17/394b lim: 50 exec/s: 32 rss: 74Mb L: 35/38 MS: 1 ShuffleBytes- 00:15:12.300 [2024-11-05 16:39:16.677823] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.300 [2024-11-05 16:39:16.677850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.300 [2024-11-05 16:39:16.677911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.300 [2024-11-05 16:39:16.677927] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.300 #34 NEW cov: 12514 ft: 15114 corp: 18/419b lim: 50 exec/s: 34 rss: 74Mb L: 25/38 MS: 2 CrossOver-InsertRepeatedBytes- 00:15:12.300 [2024-11-05 16:39:16.718301] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.300 [2024-11-05 16:39:16.718329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.300 [2024-11-05 16:39:16.718383] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.300 [2024-11-05 16:39:16.718399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.300 [2024-11-05 16:39:16.718455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:15:12.300 [2024-11-05 16:39:16.718472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:12.300 [2024-11-05 16:39:16.718532] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:15:12.300 [2024-11-05 16:39:16.718549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:12.300 #35 NEW cov: 12514 ft: 15457 corp: 19/466b lim: 50 exec/s: 35 rss: 74Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:15:12.300 [2024-11-05 16:39:16.758225] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.300 [2024-11-05 16:39:16.758255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.300 [2024-11-05 16:39:16.758320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.300 [2024-11-05 16:39:16.758341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.300 [2024-11-05 16:39:16.758400] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:15:12.300 [2024-11-05 16:39:16.758420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:12.300 #36 NEW cov: 12514 ft: 15496 corp: 20/504b lim: 50 exec/s: 36 rss: 74Mb L: 38/47 MS: 1 ChangeByte- 00:15:12.300 [2024-11-05 16:39:16.818449] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.300 [2024-11-05 16:39:16.818477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.301 [2024-11-05 16:39:16.818530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.301 [2024-11-05 16:39:16.818544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.301 [2024-11-05 16:39:16.818603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:15:12.301 [2024-11-05 
16:39:16.818619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:12.301 #37 NEW cov: 12514 ft: 15508 corp: 21/539b lim: 50 exec/s: 37 rss: 74Mb L: 35/47 MS: 1 ChangeBinInt- 00:15:12.301 [2024-11-05 16:39:16.878607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.301 [2024-11-05 16:39:16.878633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.301 [2024-11-05 16:39:16.878690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.301 [2024-11-05 16:39:16.878703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.301 [2024-11-05 16:39:16.878763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:15:12.301 [2024-11-05 16:39:16.878781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:12.559 #38 NEW cov: 12514 ft: 15532 corp: 22/578b lim: 50 exec/s: 38 rss: 74Mb L: 39/47 MS: 1 InsertByte- 00:15:12.559 [2024-11-05 16:39:16.938601] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.559 [2024-11-05 16:39:16.938629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.559 [2024-11-05 16:39:16.938692] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.559 [2024-11-05 16:39:16.938709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.559 #39 NEW cov: 12514 ft: 15552 corp: 23/604b lim: 50 exec/s: 39 rss: 74Mb L: 26/47 MS: 1 ChangeBinInt- 00:15:12.559 [2024-11-05 16:39:16.978501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.559 [2024-11-05 16:39:16.978529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.559 #40 NEW cov: 12514 ft: 15610 corp: 24/616b lim: 50 exec/s: 40 rss: 74Mb L: 12/47 MS: 1 ShuffleBytes- 00:15:12.559 [2024-11-05 16:39:17.039095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.559 [2024-11-05 16:39:17.039125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.559 [2024-11-05 16:39:17.039186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.559 [2024-11-05 16:39:17.039202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.559 [2024-11-05 16:39:17.039260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:15:12.559 [2024-11-05 16:39:17.039276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:12.559 #41 NEW cov: 12514 ft: 15618 corp: 25/651b 
lim: 50 exec/s: 41 rss: 74Mb L: 35/47 MS: 1 ChangeByte- 00:15:12.559 [2024-11-05 16:39:17.079175] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.559 [2024-11-05 16:39:17.079204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.559 [2024-11-05 16:39:17.079263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.559 [2024-11-05 16:39:17.079277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.559 [2024-11-05 16:39:17.079334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:15:12.559 [2024-11-05 16:39:17.079351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:12.559 #42 NEW cov: 12514 ft: 15653 corp: 26/686b lim: 50 exec/s: 42 rss: 74Mb L: 35/47 MS: 1 EraseBytes- 00:15:12.559 [2024-11-05 16:39:17.119106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.559 [2024-11-05 16:39:17.119132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.559 [2024-11-05 16:39:17.119194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.559 [2024-11-05 16:39:17.119211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.559 #43 NEW cov: 12514 ft: 15678 corp: 27/712b lim: 50 exec/s: 43 rss: 74Mb L: 26/47 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000<"- 00:15:12.818 [2024-11-05 16:39:17.159259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.818 [2024-11-05 16:39:17.159285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.818 [2024-11-05 16:39:17.159349] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.818 [2024-11-05 16:39:17.159365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.818 #44 NEW cov: 12514 ft: 15713 corp: 28/737b lim: 50 exec/s: 44 rss: 74Mb L: 25/47 MS: 1 ChangeByte- 00:15:12.818 [2024-11-05 16:39:17.219454] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.818 [2024-11-05 16:39:17.219481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.818 [2024-11-05 16:39:17.219540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.818 [2024-11-05 16:39:17.219557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.818 #45 NEW cov: 12514 ft: 15726 corp: 29/763b lim: 50 exec/s: 45 rss: 74Mb L: 26/47 MS: 1 ShuffleBytes- 00:15:12.818 [2024-11-05 16:39:17.279789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE 
(15) sqid:1 cid:0 nsid:0 00:15:12.818 [2024-11-05 16:39:17.279819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.818 [2024-11-05 16:39:17.279875] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.818 [2024-11-05 16:39:17.279888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.818 [2024-11-05 16:39:17.279944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:15:12.818 [2024-11-05 16:39:17.279961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:12.818 #46 NEW cov: 12514 ft: 15745 corp: 30/801b lim: 50 exec/s: 46 rss: 74Mb L: 38/47 MS: 1 CopyPart- 00:15:12.818 [2024-11-05 16:39:17.319894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.818 [2024-11-05 16:39:17.319921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.818 [2024-11-05 16:39:17.319987] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.818 [2024-11-05 16:39:17.320024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.818 [2024-11-05 16:39:17.320085] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:15:12.818 [2024-11-05 16:39:17.320105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:12.818 #47 NEW cov: 12514 ft: 15756 corp: 31/836b lim: 50 exec/s: 47 rss: 74Mb L: 35/47 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:15:12.818 [2024-11-05 16:39:17.380471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:12.818 [2024-11-05 16:39:17.380502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:12.818 [2024-11-05 16:39:17.380557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:12.818 [2024-11-05 16:39:17.380573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:12.818 [2024-11-05 16:39:17.380638] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:15:12.818 [2024-11-05 16:39:17.380654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:12.818 [2024-11-05 16:39:17.380710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:15:12.818 [2024-11-05 16:39:17.380732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:12.818 [2024-11-05 16:39:17.380789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:4 nsid:0 00:15:12.818 [2024-11-05 16:39:17.380806] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:15:13.078 #48 NEW cov: 12514 ft: 15801 corp: 32/886b lim: 50 exec/s: 48 rss: 75Mb L: 50/50 MS: 1 CopyPart- 00:15:13.078 [2024-11-05 16:39:17.440095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:15:13.078 [2024-11-05 16:39:17.440123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:13.078 [2024-11-05 16:39:17.440183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:15:13.078 [2024-11-05 16:39:17.440199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:13.078 #49 NEW cov: 12514 ft: 15806 corp: 33/909b lim: 50 exec/s: 24 rss: 75Mb L: 23/50 MS: 1 EraseBytes- 00:15:13.078 #49 DONE cov: 12514 ft: 15806 corp: 33/909b lim: 50 exec/s: 24 rss: 75Mb 00:15:13.078 ###### Recommended dictionary. ###### 00:15:13.078 "\001\000\000\000\000\000\000<" # Uses: 4 00:15:13.078 "\001\000\000\000\000\000\000\000" # Uses: 0 00:15:13.078 "\000\000\000\000\000\000\000\000" # Uses: 0 00:15:13.078 ###### End of recommended dictionary. ###### 00:15:13.078 Done 49 runs in 2 second(s) 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:15:13.078 16:39:17 llvm_fuzz.nvmf_llvm_fuzz -- 
nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22
00:15:13.337 [2024-11-05 16:39:17.665557] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization...
00:15:13.337 [2024-11-05 16:39:17.665645] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3528939 ]
00:15:13.596 [2024-11-05 16:39:18.033992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:13.596 [2024-11-05 16:39:18.093021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:13.596 [2024-11-05 16:39:18.157001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:13.596 [2024-11-05 16:39:18.173231] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 ***
00:15:13.854 INFO: Running with entropic power schedule (0xFF, 100).
00:15:13.854 This may also happen if the target rejected all inputs we tried so far 00:15:13.854 [2024-11-05 16:39:18.219340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:13.854 [2024-11-05 16:39:18.219373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:13.854 [2024-11-05 16:39:18.219435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:13.854 [2024-11-05 16:39:18.219451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:13.854 [2024-11-05 16:39:18.219508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:13.854 [2024-11-05 16:39:18.219524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:13.854 [2024-11-05 16:39:18.219581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:15:13.854 [2024-11-05 16:39:18.219598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:14.113 NEW_FUNC[1/717]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:15:14.113 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:15:14.113 #14 NEW cov: 12313 ft: 12290 corp: 2/83b lim: 85 exec/s: 0 rss: 73Mb L: 82/82 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:15:14.113 [2024-11-05 16:39:18.680410] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:14.113 [2024-11-05 16:39:18.680453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:14.113 [2024-11-05 16:39:18.680498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:14.113 [2024-11-05 16:39:18.680512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:14.113 [2024-11-05 16:39:18.680566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:14.113 [2024-11-05 16:39:18.680582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:14.113 [2024-11-05 16:39:18.680640] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:15:14.113 [2024-11-05 16:39:18.680656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:14.372 #35 NEW cov: 12426 ft: 12800 corp: 3/165b lim: 85 exec/s: 0 rss: 73Mb L: 82/82 MS: 1 ChangeByte- 00:15:14.372 [2024-11-05 16:39:18.740341] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:14.372 [2024-11-05 16:39:18.740373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 
m:0 dnr:1 00:15:14.372 [2024-11-05 16:39:18.740433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:14.372 [2024-11-05 16:39:18.740448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:14.372 [2024-11-05 16:39:18.740501] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:14.372 [2024-11-05 16:39:18.740518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:14.372 #36 NEW cov: 12432 ft: 13488 corp: 4/229b lim: 85 exec/s: 0 rss: 73Mb L: 64/82 MS: 1 CrossOver- 00:15:14.372 [2024-11-05 16:39:18.780382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:14.372 [2024-11-05 16:39:18.780412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:14.372 [2024-11-05 16:39:18.780474] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:14.372 [2024-11-05 16:39:18.780490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:14.372 [2024-11-05 16:39:18.780543] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:14.372 [2024-11-05 16:39:18.780560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:14.372 #37 NEW cov: 12517 ft: 13691 corp: 5/293b lim: 85 exec/s: 0 rss: 73Mb L: 64/82 MS: 1 ChangeBit- 00:15:14.372 [2024-11-05 16:39:18.840726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:14.372 [2024-11-05 16:39:18.840753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:14.372 [2024-11-05 16:39:18.840806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:14.372 [2024-11-05 16:39:18.840822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:14.372 [2024-11-05 16:39:18.840874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:14.372 [2024-11-05 16:39:18.840890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:14.372 [2024-11-05 16:39:18.840946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:15:14.372 [2024-11-05 16:39:18.840962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:14.372 #38 NEW cov: 12517 ft: 13756 corp: 6/375b lim: 85 exec/s: 0 rss: 73Mb L: 82/82 MS: 1 ChangeBinInt- 00:15:14.372 [2024-11-05 16:39:18.900890] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:14.372 [2024-11-05 16:39:18.900918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:15:14.372 [2024-11-05 16:39:18.900965] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:14.372 [2024-11-05 16:39:18.900981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:14.372 [2024-11-05 16:39:18.901025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:14.372 [2024-11-05 16:39:18.901041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:14.372 [2024-11-05 16:39:18.901096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:15:14.372 [2024-11-05 16:39:18.901113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:14.372 #39 NEW cov: 12517 ft: 13837 corp: 7/457b lim: 85 exec/s: 0 rss: 73Mb L: 82/82 MS: 1 CopyPart- 00:15:14.631 [2024-11-05 16:39:18.960729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:14.631 [2024-11-05 16:39:18.960758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:14.631 [2024-11-05 16:39:18.960819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:14.631 [2024-11-05 16:39:18.960840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:14.631 #40 NEW cov: 12517 ft: 14284 corp: 8/502b lim: 85 exec/s: 0 rss: 73Mb L: 45/82 MS: 1 EraseBytes- 00:15:14.631 [2024-11-05 16:39:19.000651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:14.631 [2024-11-05 16:39:19.000681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:14.631 #41 NEW cov: 12517 ft: 15099 corp: 9/535b lim: 85 exec/s: 0 rss: 73Mb L: 33/82 MS: 1 InsertRepeatedBytes- 00:15:14.631 [2024-11-05 16:39:19.040842] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:14.631 [2024-11-05 16:39:19.040871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:14.631 #45 NEW cov: 12517 ft: 15201 corp: 10/555b lim: 85 exec/s: 0 rss: 73Mb L: 20/82 MS: 4 CopyPart-ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:15:14.631 [2024-11-05 16:39:19.080946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:14.631 [2024-11-05 16:39:19.080974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:14.631 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:15:14.631 #47 NEW cov: 12540 ft: 15268 corp: 11/588b lim: 85 exec/s: 0 rss: 73Mb L: 33/82 MS: 2 EraseBytes-InsertRepeatedBytes- 00:15:14.631 [2024-11-05 16:39:19.141420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:14.631 [2024-11-05 16:39:19.141448] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:14.631 [2024-11-05 16:39:19.141505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:14.631 [2024-11-05 16:39:19.141518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:14.631 [2024-11-05 16:39:19.141576] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:14.631 [2024-11-05 16:39:19.141592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:14.631 #48 NEW cov: 12540 ft: 15344 corp: 12/652b lim: 85 exec/s: 0 rss: 73Mb L: 64/82 MS: 1 CrossOver- 00:15:14.631 [2024-11-05 16:39:19.181186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:14.631 [2024-11-05 16:39:19.181211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:14.631 #49 NEW cov: 12540 ft: 15363 corp: 13/672b lim: 85 exec/s: 0 rss: 73Mb L: 20/82 MS: 1 CopyPart- 00:15:14.890 [2024-11-05 16:39:19.221798] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:14.890 [2024-11-05 16:39:19.221826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:14.890 [2024-11-05 16:39:19.221888] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:14.890 [2024-11-05 16:39:19.221910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:14.890 [2024-11-05 16:39:19.221964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:14.890 [2024-11-05 16:39:19.221980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:14.890 [2024-11-05 16:39:19.222032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:15:14.890 [2024-11-05 16:39:19.222049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:14.890 #50 NEW cov: 12540 ft: 15374 corp: 14/754b lim: 85 exec/s: 50 rss: 74Mb L: 82/82 MS: 1 ChangeBit- 00:15:14.890 [2024-11-05 16:39:19.281979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:14.890 [2024-11-05 16:39:19.282009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:14.890 [2024-11-05 16:39:19.282061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:14.890 [2024-11-05 16:39:19.282075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:14.890 [2024-11-05 16:39:19.282128] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:14.890 [2024-11-05 16:39:19.282145] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:14.890 [2024-11-05 16:39:19.282199] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:15:14.890 [2024-11-05 16:39:19.282215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:14.890 #51 NEW cov: 12540 ft: 15430 corp: 15/836b lim: 85 exec/s: 51 rss: 74Mb L: 82/82 MS: 1 ChangeByte- 00:15:14.890 [2024-11-05 16:39:19.321573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:14.890 [2024-11-05 16:39:19.321599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:14.890 #57 NEW cov: 12540 ft: 15521 corp: 16/855b lim: 85 exec/s: 57 rss: 74Mb L: 19/82 MS: 1 CrossOver- 00:15:14.890 [2024-11-05 16:39:19.381929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:14.890 [2024-11-05 16:39:19.381957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:14.890 [2024-11-05 16:39:19.382013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:14.890 [2024-11-05 16:39:19.382028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:14.890 #58 NEW cov: 12540 ft: 15559 corp: 17/893b lim: 85 exec/s: 58 rss: 74Mb L: 38/82 MS: 1 CrossOver- 00:15:14.890 [2024-11-05 16:39:19.441932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:14.890 [2024-11-05 16:39:19.441960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:15.149 #59 NEW cov: 12540 ft: 15608 corp: 18/926b lim: 85 exec/s: 59 rss: 74Mb L: 33/82 MS: 1 CrossOver- 00:15:15.149 [2024-11-05 16:39:19.502070] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:15.149 [2024-11-05 16:39:19.502097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:15.149 #60 NEW cov: 12540 ft: 15657 corp: 19/947b lim: 85 exec/s: 60 rss: 74Mb L: 21/82 MS: 1 InsertByte- 00:15:15.149 [2024-11-05 16:39:19.542394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:15.149 [2024-11-05 16:39:19.542421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:15.149 [2024-11-05 16:39:19.542477] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:15.149 [2024-11-05 16:39:19.542491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:15.149 #61 NEW cov: 12540 ft: 15664 corp: 20/981b lim: 85 exec/s: 61 rss: 74Mb L: 34/82 MS: 1 InsertByte- 00:15:15.149 [2024-11-05 16:39:19.582803] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:15.149 [2024-11-05 16:39:19.582830] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:15.149 [2024-11-05 16:39:19.582876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:15.149 [2024-11-05 16:39:19.582894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:15.149 [2024-11-05 16:39:19.582942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:15.149 [2024-11-05 16:39:19.582958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:15.149 [2024-11-05 16:39:19.583010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:15:15.149 [2024-11-05 16:39:19.583027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:15.149 #62 NEW cov: 12540 ft: 15680 corp: 21/1063b lim: 85 exec/s: 62 rss: 74Mb L: 82/82 MS: 1 ChangeBinInt- 00:15:15.149 [2024-11-05 16:39:19.642486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:15.149 [2024-11-05 16:39:19.642512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:15.149 #63 NEW cov: 12540 ft: 15687 corp: 22/1087b lim: 85 exec/s: 63 rss: 74Mb L: 24/82 MS: 1 CopyPart- 00:15:15.149 [2024-11-05 16:39:19.703125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:15.149 [2024-11-05 16:39:19.703154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:15.149 [2024-11-05 16:39:19.703203] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:15.149 [2024-11-05 16:39:19.703219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:15.149 [2024-11-05 16:39:19.703272] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:15.149 [2024-11-05 16:39:19.703287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:15.149 [2024-11-05 16:39:19.703338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:15:15.149 [2024-11-05 16:39:19.703353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:15.149 #64 NEW cov: 12540 ft: 15703 corp: 23/1170b lim: 85 exec/s: 64 rss: 74Mb L: 83/83 MS: 1 CopyPart- 00:15:15.408 [2024-11-05 16:39:19.743087] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:15.408 [2024-11-05 16:39:19.743113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:15.408 [2024-11-05 16:39:19.743169] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:15.408 [2024-11-05 
16:39:19.743183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:15.408 [2024-11-05 16:39:19.743236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:15.408 [2024-11-05 16:39:19.743252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:15.408 #65 NEW cov: 12540 ft: 15707 corp: 24/1234b lim: 85 exec/s: 65 rss: 74Mb L: 64/83 MS: 1 ChangeBinInt- 00:15:15.408 [2024-11-05 16:39:19.803237] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:15.408 [2024-11-05 16:39:19.803263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:15.408 [2024-11-05 16:39:19.803319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:15.408 [2024-11-05 16:39:19.803332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:15.408 [2024-11-05 16:39:19.803389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:15.408 [2024-11-05 16:39:19.803405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:15.408 #66 NEW cov: 12540 ft: 15721 corp: 25/1298b lim: 85 exec/s: 66 rss: 74Mb L: 64/83 MS: 1 ChangeByte- 00:15:15.408 [2024-11-05 16:39:19.843208] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:15.408 [2024-11-05 16:39:19.843235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:15.408 [2024-11-05 16:39:19.843293] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:15.408 [2024-11-05 16:39:19.843310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:15.408 #67 NEW cov: 12540 ft: 15727 corp: 26/1343b lim: 85 exec/s: 67 rss: 74Mb L: 45/83 MS: 1 ShuffleBytes- 00:15:15.408 [2024-11-05 16:39:19.903570] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:15.408 [2024-11-05 16:39:19.903596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:15.408 [2024-11-05 16:39:19.903650] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:15.408 [2024-11-05 16:39:19.903663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:15.408 [2024-11-05 16:39:19.903721] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:15.408 [2024-11-05 16:39:19.903737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:15.408 [2024-11-05 16:39:19.943624] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:15.408 
[2024-11-05 16:39:19.943650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:15.408 [2024-11-05 16:39:19.943704] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:15.408 [2024-11-05 16:39:19.943724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:15.408 [2024-11-05 16:39:19.943795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:15.408 [2024-11-05 16:39:19.943811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:15.408 #69 NEW cov: 12540 ft: 15746 corp: 27/1398b lim: 85 exec/s: 69 rss: 74Mb L: 55/83 MS: 2 CopyPart-InsertRepeatedBytes- 00:15:15.408 [2024-11-05 16:39:19.983926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:15.408 [2024-11-05 16:39:19.983970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:15.408 [2024-11-05 16:39:19.984023] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:15.408 [2024-11-05 16:39:19.984036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:15.408 [2024-11-05 16:39:19.984089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:15.408 [2024-11-05 16:39:19.984106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:15.408 [2024-11-05 16:39:19.984160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:15:15.408 [2024-11-05 16:39:19.984179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:15.668 #70 NEW cov: 12540 ft: 15773 corp: 28/1481b lim: 85 exec/s: 70 rss: 74Mb L: 83/83 MS: 1 CopyPart- 00:15:15.668 [2024-11-05 16:39:20.024067] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:15.668 [2024-11-05 16:39:20.024096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:15.668 [2024-11-05 16:39:20.024147] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:15.668 [2024-11-05 16:39:20.024163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:15.668 [2024-11-05 16:39:20.024218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:15.668 [2024-11-05 16:39:20.024235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:15.668 [2024-11-05 16:39:20.024289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:15:15.668 [2024-11-05 16:39:20.024305] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:15.668 #71 NEW cov: 12540 ft: 15808 corp: 29/1563b lim: 85 exec/s: 71 rss: 74Mb L: 82/83 MS: 1 InsertRepeatedBytes- 00:15:15.668 [2024-11-05 16:39:20.083762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:15.668 [2024-11-05 16:39:20.083792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:15.668 #72 NEW cov: 12540 ft: 15840 corp: 30/1591b lim: 85 exec/s: 72 rss: 74Mb L: 28/83 MS: 1 EraseBytes- 00:15:15.668 [2024-11-05 16:39:20.144440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:15.668 [2024-11-05 16:39:20.144471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:15.668 [2024-11-05 16:39:20.144525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:15.668 [2024-11-05 16:39:20.144539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:15.668 [2024-11-05 16:39:20.144595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:15.668 [2024-11-05 16:39:20.144611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:15.668 [2024-11-05 16:39:20.144669] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:15:15.668 [2024-11-05 16:39:20.144686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:15.668 #73 NEW cov: 12540 ft: 15854 corp: 31/1674b lim: 85 exec/s: 73 rss: 74Mb L: 83/83 MS: 1 InsertByte- 00:15:15.668 [2024-11-05 16:39:20.204594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:15:15.668 [2024-11-05 16:39:20.204624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:15.668 [2024-11-05 16:39:20.204693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:15:15.668 [2024-11-05 16:39:20.204733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:15.668 [2024-11-05 16:39:20.204787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:15:15.668 [2024-11-05 16:39:20.204802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:15.668 [2024-11-05 16:39:20.204860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:15:15.668 [2024-11-05 16:39:20.204876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:15.668 #74 NEW cov: 12540 ft: 15865 corp: 32/1748b lim: 85 exec/s: 37 rss: 74Mb L: 74/83 MS: 1 CopyPart- 00:15:15.668 #74 DONE cov: 12540 ft: 15865 corp: 32/1748b lim: 85 exec/s: 37 rss: 74Mb 00:15:15.668 Done 74 runs in 2 
second(s)
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423'
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create
00:15:15.927 16:39:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23
00:15:16.494 [2024-11-05 16:39:20.406054] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization...
00:15:16.494 [2024-11-05 16:39:20.406140] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3529309 ]
00:15:16.494 [2024-11-05 16:39:20.782960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:16.494 [2024-11-05 16:39:20.841042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:16.494 [2024-11-05 16:39:20.905041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:16.494 [2024-11-05 16:39:20.921285] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 ***
00:15:16.494 INFO: Running with entropic power schedule (0xFF, 100).
00:15:16.494 INFO: Seed: 4085740070 00:15:16.494 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:15:16.494 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:15:16.494 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:15:16.494 INFO: A corpus is not provided, starting from an empty corpus 00:15:16.494 #2 INITED exec/s: 0 rss: 66Mb 00:15:16.494 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:15:16.494 This may also happen if the target rejected all inputs we tried so far 00:15:16.494 [2024-11-05 16:39:20.966841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:16.494 [2024-11-05 16:39:20.966875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:16.494 [2024-11-05 16:39:20.966920] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:16.494 [2024-11-05 16:39:20.966935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:16.753 NEW_FUNC[1/716]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:15:16.753 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:15:16.753 #5 NEW cov: 12240 ft: 12243 corp: 2/13b lim: 25 exec/s: 0 rss: 73Mb L: 12/12 MS: 3 CrossOver-CopyPart-InsertRepeatedBytes- 00:15:16.753 [2024-11-05 16:39:21.287917] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:16.753 [2024-11-05 16:39:21.287958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:16.753 [2024-11-05 16:39:21.288011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:16.753 [2024-11-05 16:39:21.288027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:16.753 [2024-11-05 16:39:21.288083] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:16.753 [2024-11-05 16:39:21.288099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:16.753 [2024-11-05 16:39:21.288153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:16.753 [2024-11-05 16:39:21.288169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:16.753 #12 NEW cov: 12359 ft: 13324 corp: 3/34b lim: 25 exec/s: 0 rss: 73Mb L: 21/21 MS: 2 InsertByte-InsertRepeatedBytes- 00:15:16.753 [2024-11-05 16:39:21.327945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:16.753 [2024-11-05 16:39:21.327975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:16.753 [2024-11-05 16:39:21.328029] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:16.753 [2024-11-05 16:39:21.328043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:16.753 [2024-11-05 16:39:21.328097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:16.753 [2024-11-05 16:39:21.328113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:16.753 [2024-11-05 16:39:21.328170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:16.753 [2024-11-05 16:39:21.328187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:17.011 #13 NEW cov: 12365 ft: 13506 corp: 4/55b lim: 25 exec/s: 0 rss: 74Mb L: 21/21 MS: 1 ChangeBinInt- 00:15:17.011 [2024-11-05 16:39:21.387920] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.011 [2024-11-05 16:39:21.387950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.011 [2024-11-05 16:39:21.388011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:17.011 [2024-11-05 16:39:21.388026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:17.011 [2024-11-05 16:39:21.388084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:17.011 [2024-11-05 16:39:21.388101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:17.011 #15 NEW cov: 12450 ft: 14026 corp: 5/73b lim: 25 exec/s: 0 rss: 74Mb L: 18/21 MS: 2 ChangeByte-InsertRepeatedBytes- 00:15:17.011 [2024-11-05 16:39:21.428173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.011 [2024-11-05 16:39:21.428204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.011 [2024-11-05 16:39:21.428260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:17.011 [2024-11-05 16:39:21.428274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:17.011 [2024-11-05 16:39:21.428328] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:17.011 [2024-11-05 16:39:21.428344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:17.011 [2024-11-05 16:39:21.428399] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:17.011 [2024-11-05 16:39:21.428415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:17.011 #16 NEW cov: 12450 ft: 14078 corp: 6/97b lim: 25 exec/s: 0 rss: 74Mb L: 24/24 MS: 1 CopyPart- 00:15:17.011 [2024-11-05 16:39:21.488061] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.011 [2024-11-05 16:39:21.488090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.011 [2024-11-05 16:39:21.488145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:17.011 [2024-11-05 16:39:21.488160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:17.011 #17 NEW cov: 12450 ft: 14171 corp: 7/109b lim: 25 exec/s: 0 rss: 74Mb L: 12/24 MS: 1 ShuffleBytes- 00:15:17.011 [2024-11-05 16:39:21.548260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.011 [2024-11-05 16:39:21.548287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.011 [2024-11-05 16:39:21.548352] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:17.011 [2024-11-05 16:39:21.548374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:17.011 #18 NEW cov: 12450 ft: 14225 corp: 8/121b lim: 25 exec/s: 0 rss: 74Mb L: 12/24 MS: 1 CopyPart- 00:15:17.269 [2024-11-05 16:39:21.608403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.269 [2024-11-05 16:39:21.608429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.269 [2024-11-05 16:39:21.608486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:17.269 [2024-11-05 16:39:21.608502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:17.269 #19 NEW cov: 12450 ft: 14292 corp: 9/133b lim: 25 exec/s: 0 rss: 74Mb L: 12/24 MS: 1 ChangeBinInt- 00:15:17.269 [2024-11-05 16:39:21.668849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.269 [2024-11-05 16:39:21.668878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.269 [2024-11-05 16:39:21.668931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:17.269 [2024-11-05 16:39:21.668946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:17.269 [2024-11-05 16:39:21.669001] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:17.269 [2024-11-05 16:39:21.669018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:17.269 [2024-11-05 16:39:21.669073] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:17.269 [2024-11-05 16:39:21.669089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:17.269 #20 NEW cov: 12450 ft: 14329 corp: 10/154b lim: 25 exec/s: 0 rss: 
74Mb L: 21/24 MS: 1 CrossOver- 00:15:17.269 [2024-11-05 16:39:21.729148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.269 [2024-11-05 16:39:21.729176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.269 [2024-11-05 16:39:21.729228] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:17.269 [2024-11-05 16:39:21.729246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:17.269 [2024-11-05 16:39:21.729292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:17.269 [2024-11-05 16:39:21.729309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:17.269 [2024-11-05 16:39:21.729365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:17.269 [2024-11-05 16:39:21.729382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:17.269 [2024-11-05 16:39:21.729438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:15:17.269 [2024-11-05 16:39:21.729455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:15:17.269 #21 NEW cov: 12450 ft: 14399 corp: 11/179b lim: 25 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 CopyPart- 00:15:17.269 [2024-11-05 16:39:21.788792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.269 [2024-11-05 16:39:21.788821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.270 #22 NEW cov: 12450 ft: 14778 corp: 12/186b lim: 25 exec/s: 0 rss: 74Mb L: 7/25 MS: 1 EraseBytes- 00:15:17.270 [2024-11-05 16:39:21.828889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.270 [2024-11-05 16:39:21.828917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.528 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:15:17.528 #23 NEW cov: 12473 ft: 14823 corp: 13/193b lim: 25 exec/s: 0 rss: 74Mb L: 7/25 MS: 1 CopyPart- 00:15:17.528 [2024-11-05 16:39:21.889421] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.528 [2024-11-05 16:39:21.889449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.528 [2024-11-05 16:39:21.889504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:17.528 [2024-11-05 16:39:21.889521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:17.528 [2024-11-05 16:39:21.889574] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:17.528 [2024-11-05 16:39:21.889590] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:17.528 [2024-11-05 16:39:21.889644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:17.528 [2024-11-05 16:39:21.889659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:17.528 #24 NEW cov: 12473 ft: 14893 corp: 14/214b lim: 25 exec/s: 0 rss: 74Mb L: 21/25 MS: 1 ShuffleBytes- 00:15:17.528 [2024-11-05 16:39:21.929331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.528 [2024-11-05 16:39:21.929359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.528 [2024-11-05 16:39:21.929414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:17.528 [2024-11-05 16:39:21.929428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:17.528 #25 NEW cov: 12473 ft: 14942 corp: 15/226b lim: 25 exec/s: 0 rss: 74Mb L: 12/25 MS: 1 CrossOver- 00:15:17.528 [2024-11-05 16:39:21.969314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.528 [2024-11-05 16:39:21.969342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.528 #26 NEW cov: 12473 ft: 14948 corp: 16/235b lim: 25 exec/s: 26 rss: 74Mb L: 9/25 MS: 1 EraseBytes- 00:15:17.528 [2024-11-05 16:39:22.029857] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.528 [2024-11-05 16:39:22.029885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.528 [2024-11-05 16:39:22.029938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:17.528 [2024-11-05 16:39:22.029954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:17.528 [2024-11-05 16:39:22.030007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:17.528 [2024-11-05 16:39:22.030023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:17.528 [2024-11-05 16:39:22.030078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:17.528 [2024-11-05 16:39:22.030094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:17.528 #27 NEW cov: 12473 ft: 14993 corp: 17/256b lim: 25 exec/s: 27 rss: 74Mb L: 21/25 MS: 1 EraseBytes- 00:15:17.528 [2024-11-05 16:39:22.089933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.528 [2024-11-05 16:39:22.089960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.528 [2024-11-05 16:39:22.090024] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:17.529 [2024-11-05 16:39:22.090061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:17.529 [2024-11-05 16:39:22.090120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:17.529 [2024-11-05 16:39:22.090140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:17.787 #28 NEW cov: 12473 ft: 15054 corp: 18/274b lim: 25 exec/s: 28 rss: 74Mb L: 18/25 MS: 1 ChangeBit- 00:15:17.787 [2024-11-05 16:39:22.130168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.787 [2024-11-05 16:39:22.130195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.787 [2024-11-05 16:39:22.130246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:17.787 [2024-11-05 16:39:22.130263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:17.787 [2024-11-05 16:39:22.130315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:17.787 [2024-11-05 16:39:22.130331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:17.787 [2024-11-05 16:39:22.130384] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:17.787 [2024-11-05 16:39:22.130400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:17.787 #29 NEW cov: 12473 ft: 15084 corp: 19/295b lim: 25 exec/s: 29 rss: 75Mb L: 21/25 MS: 1 ChangeBinInt- 00:15:17.787 [2024-11-05 16:39:22.190182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.787 [2024-11-05 16:39:22.190208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.787 [2024-11-05 16:39:22.190261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:17.787 [2024-11-05 16:39:22.190276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:17.787 [2024-11-05 16:39:22.190330] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:17.787 [2024-11-05 16:39:22.190346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:17.787 #30 NEW cov: 12473 ft: 15178 corp: 20/313b lim: 25 exec/s: 30 rss: 75Mb L: 18/25 MS: 1 CrossOver- 00:15:17.787 [2024-11-05 16:39:22.230438] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.787 [2024-11-05 16:39:22.230466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.787 [2024-11-05 16:39:22.230520] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:17.787 [2024-11-05 16:39:22.230534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:17.787 [2024-11-05 16:39:22.230587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:17.787 [2024-11-05 16:39:22.230602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:17.787 [2024-11-05 16:39:22.230658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:17.787 [2024-11-05 16:39:22.230674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:17.787 #31 NEW cov: 12473 ft: 15200 corp: 21/334b lim: 25 exec/s: 31 rss: 75Mb L: 21/25 MS: 1 ChangeBinInt- 00:15:17.787 [2024-11-05 16:39:22.270695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.787 [2024-11-05 16:39:22.270727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.787 [2024-11-05 16:39:22.270795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:17.787 [2024-11-05 16:39:22.270814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:17.787 [2024-11-05 16:39:22.270867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:17.787 [2024-11-05 16:39:22.270882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:17.787 [2024-11-05 16:39:22.270938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:17.787 [2024-11-05 16:39:22.270954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:17.787 [2024-11-05 16:39:22.271011] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:15:17.787 [2024-11-05 16:39:22.271027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:15:17.787 #32 NEW cov: 12473 ft: 15213 corp: 22/359b lim: 25 exec/s: 32 rss: 75Mb L: 25/25 MS: 1 CrossOver- 00:15:17.787 [2024-11-05 16:39:22.310420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.787 [2024-11-05 16:39:22.310446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.787 [2024-11-05 16:39:22.310509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:17.787 [2024-11-05 16:39:22.310525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:17.787 #33 NEW cov: 12473 ft: 15229 corp: 23/372b lim: 25 exec/s: 33 rss: 75Mb L: 13/25 MS: 1 InsertByte- 00:15:17.787 [2024-11-05 16:39:22.350807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REPORT (0e) sqid:1 cid:0 nsid:0 00:15:17.787 [2024-11-05 16:39:22.350835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:17.787 [2024-11-05 16:39:22.350901] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:17.787 [2024-11-05 16:39:22.350935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:17.787 [2024-11-05 16:39:22.350992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:17.787 [2024-11-05 16:39:22.351009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:17.787 [2024-11-05 16:39:22.351068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:17.787 [2024-11-05 16:39:22.351085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:18.046 [2024-11-05 16:39:22.410975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:18.046 [2024-11-05 16:39:22.411004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:18.046 [2024-11-05 16:39:22.411052] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:18.046 [2024-11-05 16:39:22.411068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:18.046 [2024-11-05 16:39:22.411119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:18.046 [2024-11-05 16:39:22.411136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:18.046 [2024-11-05 16:39:22.411195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:18.046 [2024-11-05 16:39:22.411211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:18.046 #35 NEW cov: 12473 ft: 15241 corp: 24/396b lim: 25 exec/s: 35 rss: 75Mb L: 24/25 MS: 2 CrossOver-InsertRepeatedBytes- 00:15:18.046 [2024-11-05 16:39:22.451170] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:18.046 [2024-11-05 16:39:22.451197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:18.046 [2024-11-05 16:39:22.451246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:18.046 [2024-11-05 16:39:22.451263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:18.046 [2024-11-05 16:39:22.451304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:18.046 [2024-11-05 16:39:22.451320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:18.046 
[2024-11-05 16:39:22.451377] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:18.046 [2024-11-05 16:39:22.451393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:18.046 [2024-11-05 16:39:22.451448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:15:18.046 [2024-11-05 16:39:22.451464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:15:18.046 #36 NEW cov: 12473 ft: 15273 corp: 25/421b lim: 25 exec/s: 36 rss: 75Mb L: 25/25 MS: 1 ShuffleBytes- 00:15:18.046 [2024-11-05 16:39:22.491211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:18.046 [2024-11-05 16:39:22.491237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:18.046 [2024-11-05 16:39:22.491288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:18.046 [2024-11-05 16:39:22.491304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:18.046 [2024-11-05 16:39:22.491353] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:18.046 [2024-11-05 16:39:22.491368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:18.046 [2024-11-05 16:39:22.491424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:18.046 [2024-11-05 16:39:22.491440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:18.046 #37 NEW cov: 12473 ft: 15298 corp: 26/442b lim: 25 exec/s: 37 rss: 75Mb L: 21/25 MS: 1 ChangeBit- 00:15:18.046 [2024-11-05 16:39:22.551508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:18.046 [2024-11-05 16:39:22.551535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:18.046 [2024-11-05 16:39:22.551587] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:18.046 [2024-11-05 16:39:22.551603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:18.046 [2024-11-05 16:39:22.551639] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:18.046 [2024-11-05 16:39:22.551655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:18.046 [2024-11-05 16:39:22.551707] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:18.046 [2024-11-05 16:39:22.551729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:18.046 [2024-11-05 16:39:22.551785] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 
00:15:18.046 [2024-11-05 16:39:22.551800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:15:18.047 #38 NEW cov: 12473 ft: 15308 corp: 27/467b lim: 25 exec/s: 38 rss: 75Mb L: 25/25 MS: 1 ShuffleBytes- 00:15:18.047 [2024-11-05 16:39:22.611540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:18.047 [2024-11-05 16:39:22.611567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:18.047 [2024-11-05 16:39:22.611632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:18.047 [2024-11-05 16:39:22.611669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:18.047 [2024-11-05 16:39:22.611734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:18.047 [2024-11-05 16:39:22.611754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:18.047 [2024-11-05 16:39:22.611815] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:18.047 [2024-11-05 16:39:22.611834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:18.305 #39 NEW cov: 12473 ft: 15343 corp: 28/488b lim: 25 exec/s: 39 rss: 75Mb L: 21/25 MS: 1 ChangeBit- 00:15:18.305 [2024-11-05 16:39:22.651378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:18.305 [2024-11-05 16:39:22.651406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:18.305 [2024-11-05 16:39:22.651467] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:18.305 [2024-11-05 16:39:22.651483] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:18.305 #40 NEW cov: 12473 ft: 15348 corp: 29/501b lim: 25 exec/s: 40 rss: 75Mb L: 13/25 MS: 1 ShuffleBytes- 00:15:18.305 [2024-11-05 16:39:22.691895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:18.305 [2024-11-05 16:39:22.691922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:18.305 [2024-11-05 16:39:22.691974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:18.305 [2024-11-05 16:39:22.691991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:18.305 [2024-11-05 16:39:22.692019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:18.305 [2024-11-05 16:39:22.692036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:18.305 [2024-11-05 16:39:22.692090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:18.305 
[2024-11-05 16:39:22.692107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:18.305 [2024-11-05 16:39:22.692163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:15:18.305 [2024-11-05 16:39:22.692180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:15:18.305 #41 NEW cov: 12473 ft: 15352 corp: 30/526b lim: 25 exec/s: 41 rss: 75Mb L: 25/25 MS: 1 CopyPart- 00:15:18.305 [2024-11-05 16:39:22.731569] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:18.305 [2024-11-05 16:39:22.731595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:18.305 #44 NEW cov: 12473 ft: 15366 corp: 31/533b lim: 25 exec/s: 44 rss: 75Mb L: 7/25 MS: 3 ShuffleBytes-CopyPart-InsertRepeatedBytes- 00:15:18.305 [2024-11-05 16:39:22.772042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:18.305 [2024-11-05 16:39:22.772071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:18.305 [2024-11-05 16:39:22.772123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:18.305 [2024-11-05 16:39:22.772139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:18.305 [2024-11-05 16:39:22.772193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:18.305 [2024-11-05 16:39:22.772209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:18.305 [2024-11-05 16:39:22.772265] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:18.305 [2024-11-05 16:39:22.772281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:18.305 #45 NEW cov: 12473 ft: 15369 corp: 32/557b lim: 25 exec/s: 45 rss: 75Mb L: 24/25 MS: 1 ChangeByte- 00:15:18.305 [2024-11-05 16:39:22.812171] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:18.305 [2024-11-05 16:39:22.812201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:18.306 [2024-11-05 16:39:22.812250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:18.306 [2024-11-05 16:39:22.812266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:18.306 [2024-11-05 16:39:22.812320] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:18.306 [2024-11-05 16:39:22.812336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:18.306 [2024-11-05 16:39:22.812390] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 
00:15:18.306 [2024-11-05 16:39:22.812405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:18.306 #46 NEW cov: 12473 ft: 15425 corp: 33/578b lim: 25 exec/s: 46 rss: 75Mb L: 21/25 MS: 1 ShuffleBytes- 00:15:18.306 [2024-11-05 16:39:22.872424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:18.306 [2024-11-05 16:39:22.872451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:18.306 [2024-11-05 16:39:22.872515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:18.306 [2024-11-05 16:39:22.872534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:18.306 [2024-11-05 16:39:22.872607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:18.306 [2024-11-05 16:39:22.872626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:18.306 [2024-11-05 16:39:22.872685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:18.306 [2024-11-05 16:39:22.872704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:18.306 [2024-11-05 16:39:22.872771] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:15:18.306 [2024-11-05 16:39:22.872791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:15:18.565 #47 NEW cov: 12473 ft: 15439 corp: 34/603b lim: 25 exec/s: 47 rss: 75Mb L: 25/25 MS: 1 CMP- DE: "\001:\237F\327\004\270\202"- 00:15:18.565 [2024-11-05 16:39:22.912433] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:15:18.565 [2024-11-05 16:39:22.912463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:18.565 [2024-11-05 16:39:22.912514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:15:18.565 [2024-11-05 16:39:22.912530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:18.565 [2024-11-05 16:39:22.912580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:15:18.565 [2024-11-05 16:39:22.912596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:18.565 [2024-11-05 16:39:22.912654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:15:18.565 [2024-11-05 16:39:22.912671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:18.565 #54 NEW cov: 12473 ft: 15444 corp: 35/625b lim: 25 exec/s: 54 rss: 75Mb L: 22/25 MS: 2 ShuffleBytes-CrossOver- 00:15:18.565 [2024-11-05 16:39:22.952150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT 
(0e) sqid:1 cid:0 nsid:0 00:15:18.565 [2024-11-05 16:39:22.952177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:18.565 #55 NEW cov: 12473 ft: 15449 corp: 36/632b lim: 25 exec/s: 27 rss: 75Mb L: 7/25 MS: 1 ChangeBinInt- 00:15:18.565 #55 DONE cov: 12473 ft: 15449 corp: 36/632b lim: 25 exec/s: 27 rss: 75Mb 00:15:18.565 ###### Recommended dictionary. ###### 00:15:18.565 "\001:\237F\327\004\270\202" # Uses: 0 00:15:18.565 ###### End of recommended dictionary. ###### 00:15:18.565 Done 55 runs in 2 second(s) 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:15:18.565 16:39:23 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:15:18.824 [2024-11-05 16:39:23.155001] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 
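[editor's note] Each run also prepares a LeakSanitizer suppression list before the fuzzer starts, as traced at run.sh@32 and run.sh@41-42. A minimal sketch, assuming the two traced echo lines are redirected into $suppress_file (the redirection itself is not visible in the xtrace output):

  suppress_file=/var/tmp/suppress_nvmf_fuzz
  # Known-benign leaks in the shutdown path are suppressed so LSAN does not
  # fail the run (function names taken verbatim from the traced echo lines).
  echo "leak:spdk_nvmf_qpair_disconnect"  > "$suppress_file"
  echo "leak:nvmf_ctrlr_create"          >> "$suppress_file"
  export LSAN_OPTIONS="report_objects=1:suppressions=$suppress_file:print_suppressions=0"

With that in place, run 24 below proceeds exactly like run 23: same binary, same one-second -t budget, with only the port (4424) and corpus directory (llvm_nvmf_24) changed.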
00:15:18.824 [2024-11-05 16:39:23.155097] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3529679 ] 00:15:19.082 [2024-11-05 16:39:23.535412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.082 [2024-11-05 16:39:23.593056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.082 [2024-11-05 16:39:23.656940] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.340 [2024-11-05 16:39:23.673202] tcp.c:1082:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:15:19.340 INFO: Running with entropic power schedule (0xFF, 100). 00:15:19.340 INFO: Seed: 2543776921 00:15:19.340 INFO: Loaded 1 modules (387411 inline 8-bit counters): 387411 [0x2c3aa4c, 0x2c9939f), 00:15:19.340 INFO: Loaded 1 PC tables (387411 PCs): 387411 [0x2c993a0,0x32828d0), 00:15:19.340 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:15:19.341 INFO: A corpus is not provided, starting from an empty corpus 00:15:19.341 #2 INITED exec/s: 0 rss: 66Mb 00:15:19.341 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:15:19.341 This may also happen if the target rejected all inputs we tried so far 00:15:19.341 [2024-11-05 16:39:23.718932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.341 [2024-11-05 16:39:23.718967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:19.341 [2024-11-05 16:39:23.719011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.341 [2024-11-05 16:39:23.719028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:19.599 NEW_FUNC[1/717]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:15:19.599 NEW_FUNC[2/717]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:15:19.599 #8 NEW cov: 12318 ft: 12315 corp: 2/42b lim: 100 exec/s: 0 rss: 73Mb L: 41/41 MS: 1 InsertRepeatedBytes- 00:15:19.599 [2024-11-05 16:39:24.180094] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.599 [2024-11-05 16:39:24.180138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:19.599 [2024-11-05 16:39:24.180185] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.599 [2024-11-05 16:39:24.180202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:19.858 #9 NEW cov: 12431 ft: 12837 corp: 3/83b lim: 100 exec/s: 0 rss: 73Mb L: 41/41 MS: 1 ChangeBinInt- 00:15:19.858 [2024-11-05 16:39:24.240005] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.858 [2024-11-05 16:39:24.240040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:19.858 #10 NEW cov: 12437 ft: 13944 corp: 4/114b lim: 100 exec/s: 0 rss: 73Mb L: 31/41 MS: 1 EraseBytes- 00:15:19.858 [2024-11-05 16:39:24.300276] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.858 [2024-11-05 16:39:24.300307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:19.858 [2024-11-05 16:39:24.300355] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.858 [2024-11-05 16:39:24.300371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:19.858 #11 NEW cov: 12522 ft: 14173 corp: 5/155b lim: 100 exec/s: 0 rss: 73Mb L: 41/41 MS: 1 ChangeBit- 00:15:19.858 [2024-11-05 16:39:24.340414] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.858 [2024-11-05 16:39:24.340444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:19.858 [2024-11-05 16:39:24.340502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.858 [2024-11-05 16:39:24.340518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:19.858 #12 NEW cov: 12522 ft: 14300 corp: 6/196b lim: 100 exec/s: 0 rss: 73Mb L: 41/41 MS: 1 ShuffleBytes- 00:15:19.858 [2024-11-05 16:39:24.400418] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:19.858 [2024-11-05 16:39:24.400447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.116 #13 NEW cov: 12522 ft: 14394 corp: 7/220b lim: 100 exec/s: 0 rss: 73Mb L: 24/41 MS: 1 EraseBytes- 00:15:20.116 [2024-11-05 16:39:24.460761] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4281663273 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.116 [2024-11-05 16:39:24.460789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.116 [2024-11-05 16:39:24.460851] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073675997183 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.116 [2024-11-05 16:39:24.460874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:20.116 #14 NEW cov: 12522 ft: 14434 corp: 8/262b lim: 100 exec/s: 0 rss: 73Mb L: 42/42 MS: 1 InsertByte- 00:15:20.116 [2024-11-05 16:39:24.501270] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.116 [2024-11-05 16:39:24.501298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.116 [2024-11-05 16:39:24.501350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:1095216660480 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.116 [2024-11-05 16:39:24.501367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:20.116 [2024-11-05 16:39:24.501425] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.116 [2024-11-05 16:39:24.501443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:20.116 [2024-11-05 16:39:24.501505] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073675997183 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.116 [2024-11-05 16:39:24.501523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:20.116 #15 NEW cov: 12522 ft: 14882 corp: 9/344b lim: 100 exec/s: 0 rss: 73Mb L: 82/82 MS: 1 CopyPart- 00:15:20.116 [2024-11-05 16:39:24.541028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.117 [2024-11-05 16:39:24.541055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.117 [2024-11-05 16:39:24.541115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.117 [2024-11-05 16:39:24.541132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:20.117 #16 NEW cov: 12522 ft: 15003 corp: 10/385b lim: 100 exec/s: 0 rss: 73Mb L: 41/82 MS: 1 ChangeByte- 00:15:20.117 [2024-11-05 16:39:24.581110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.117 [2024-11-05 16:39:24.581139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.117 [2024-11-05 16:39:24.581196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709027327 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.117 [2024-11-05 16:39:24.581211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:20.117 NEW_FUNC[1/1]: 0x1c30458 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:15:20.117 #17 NEW cov: 12545 ft: 15032 corp: 11/426b lim: 100 exec/s: 0 rss: 73Mb L: 41/82 MS: 1 ChangeBit- 00:15:20.117 [2024-11-05 16:39:24.621236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:65536 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:20.117 [2024-11-05 16:39:24.621263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.117 [2024-11-05 16:39:24.621325] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65407 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.117 [2024-11-05 16:39:24.621343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:20.117 #18 NEW cov: 12545 ft: 15039 corp: 12/475b lim: 100 exec/s: 0 rss: 73Mb L: 49/82 MS: 1 CMP- DE: "~\3014>G\237:\000"- 00:15:20.117 [2024-11-05 16:39:24.661142] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.117 [2024-11-05 16:39:24.661169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.375 #19 NEW cov: 12545 ft: 15096 corp: 13/505b lim: 100 exec/s: 0 rss: 74Mb L: 30/82 MS: 1 EraseBytes- 00:15:20.375 [2024-11-05 16:39:24.721520] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.375 [2024-11-05 16:39:24.721546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.375 [2024-11-05 16:39:24.721609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073675997183 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.376 [2024-11-05 16:39:24.721647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:20.376 #20 NEW cov: 12545 ft: 15105 corp: 14/547b lim: 100 exec/s: 20 rss: 74Mb L: 42/82 MS: 1 InsertByte- 00:15:20.376 [2024-11-05 16:39:24.761627] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4281663410 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.376 [2024-11-05 16:39:24.761654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.376 [2024-11-05 16:39:24.761706] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073675997183 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.376 [2024-11-05 16:39:24.761729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:20.376 #21 NEW cov: 12545 ft: 15175 corp: 15/589b lim: 100 exec/s: 21 rss: 74Mb L: 42/82 MS: 1 ChangeByte- 00:15:20.376 [2024-11-05 16:39:24.821622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.376 [2024-11-05 16:39:24.821650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.376 #22 NEW cov: 12545 ft: 15203 corp: 16/619b lim: 100 exec/s: 22 rss: 74Mb L: 30/82 MS: 1 ChangeBit- 00:15:20.376 [2024-11-05 16:39:24.881956] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:256 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:20.376 [2024-11-05 16:39:24.881984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.376 [2024-11-05 16:39:24.882044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709027327 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.376 [2024-11-05 16:39:24.882060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:20.376 #23 NEW cov: 12545 ft: 15220 corp: 17/660b lim: 100 exec/s: 23 rss: 74Mb L: 41/82 MS: 1 CopyPart- 00:15:20.376 [2024-11-05 16:39:24.942288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.376 [2024-11-05 16:39:24.942315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.376 [2024-11-05 16:39:24.942372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.376 [2024-11-05 16:39:24.942386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:20.376 [2024-11-05 16:39:24.942444] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.376 [2024-11-05 16:39:24.942461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:20.634 #24 NEW cov: 12545 ft: 15533 corp: 18/723b lim: 100 exec/s: 24 rss: 74Mb L: 63/82 MS: 1 InsertRepeatedBytes- 00:15:20.634 [2024-11-05 16:39:24.982093] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.634 [2024-11-05 16:39:24.982120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.634 #25 NEW cov: 12545 ft: 15568 corp: 19/748b lim: 100 exec/s: 25 rss: 74Mb L: 25/82 MS: 1 InsertByte- 00:15:20.634 [2024-11-05 16:39:25.042434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4281663410 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.634 [2024-11-05 16:39:25.042462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.634 [2024-11-05 16:39:25.042506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073675997183 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.634 [2024-11-05 16:39:25.042523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:20.634 #26 NEW cov: 12545 ft: 15598 corp: 20/790b lim: 100 exec/s: 26 rss: 74Mb L: 42/82 MS: 1 ChangeBit- 00:15:20.634 [2024-11-05 16:39:25.102421] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.634 [2024-11-05 16:39:25.102448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.634 #27 NEW cov: 12545 ft: 15630 corp: 21/820b lim: 100 exec/s: 27 rss: 74Mb L: 30/82 MS: 1 ChangeASCIIInt- 00:15:20.634 [2024-11-05 16:39:25.142748] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.635 [2024-11-05 16:39:25.142774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.635 [2024-11-05 16:39:25.142833] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709027327 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.635 [2024-11-05 16:39:25.142847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:20.635 #28 NEW cov: 12545 ft: 15670 corp: 22/861b lim: 100 exec/s: 28 rss: 74Mb L: 41/82 MS: 1 ChangeBit- 00:15:20.635 [2024-11-05 16:39:25.202917] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.635 [2024-11-05 16:39:25.202944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.635 [2024-11-05 16:39:25.203007] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.635 [2024-11-05 16:39:25.203023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:20.893 #29 NEW cov: 12545 ft: 15699 corp: 23/903b lim: 100 exec/s: 29 rss: 74Mb L: 42/82 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\000"- 00:15:20.893 [2024-11-05 16:39:25.263253] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.893 [2024-11-05 16:39:25.263280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.893 [2024-11-05 16:39:25.263343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.893 [2024-11-05 16:39:25.263382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:20.893 [2024-11-05 16:39:25.263443] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744069414649855 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.893 [2024-11-05 16:39:25.263463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:20.893 #30 NEW cov: 12545 ft: 15720 corp: 24/977b lim: 100 exec/s: 30 rss: 74Mb L: 74/82 MS: 1 CrossOver- 00:15:20.893 [2024-11-05 16:39:25.323034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.893 [2024-11-05 16:39:25.323061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.894 #31 NEW cov: 12545 ft: 15727 corp: 
25/1009b lim: 100 exec/s: 31 rss: 74Mb L: 32/82 MS: 1 CrossOver- 00:15:20.894 [2024-11-05 16:39:25.383402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4281663273 len:65281 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.894 [2024-11-05 16:39:25.383428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.894 [2024-11-05 16:39:25.383486] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073675997183 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.894 [2024-11-05 16:39:25.383503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:20.894 #32 NEW cov: 12545 ft: 15814 corp: 26/1051b lim: 100 exec/s: 32 rss: 74Mb L: 42/82 MS: 1 ChangeByte- 00:15:20.894 [2024-11-05 16:39:25.423887] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.894 [2024-11-05 16:39:25.423915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:20.894 [2024-11-05 16:39:25.423972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.894 [2024-11-05 16:39:25.423987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:20.894 [2024-11-05 16:39:25.424040] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.894 [2024-11-05 16:39:25.424058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:20.894 [2024-11-05 16:39:25.424116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:20.894 [2024-11-05 16:39:25.424133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:15:20.894 #33 NEW cov: 12545 ft: 15816 corp: 27/1133b lim: 100 exec/s: 33 rss: 74Mb L: 82/82 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:15:21.152 [2024-11-05 16:39:25.483703] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:21.152 [2024-11-05 16:39:25.483738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:21.152 [2024-11-05 16:39:25.483785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709027327 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:21.152 [2024-11-05 16:39:25.483804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:21.152 #34 NEW cov: 12545 ft: 15817 corp: 28/1182b lim: 100 exec/s: 34 rss: 74Mb L: 49/82 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\000"- 00:15:21.152 [2024-11-05 16:39:25.543874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:21.152 [2024-11-05 16:39:25.543903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:21.152 [2024-11-05 16:39:25.543972] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:21.152 [2024-11-05 16:39:25.543990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:21.152 #35 NEW cov: 12545 ft: 15831 corp: 29/1223b lim: 100 exec/s: 35 rss: 74Mb L: 41/82 MS: 1 CrossOver- 00:15:21.152 [2024-11-05 16:39:25.583794] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:21.152 [2024-11-05 16:39:25.583826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:21.152 #36 NEW cov: 12545 ft: 15866 corp: 30/1253b lim: 100 exec/s: 36 rss: 74Mb L: 30/82 MS: 1 ChangeBit- 00:15:21.152 [2024-11-05 16:39:25.624244] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:21.152 [2024-11-05 16:39:25.624274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:21.152 [2024-11-05 16:39:25.624335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:21.152 [2024-11-05 16:39:25.624350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:21.152 [2024-11-05 16:39:25.624408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744069414649855 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:21.152 [2024-11-05 16:39:25.624424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:15:21.152 #37 NEW cov: 12545 ft: 15899 corp: 31/1327b lim: 100 exec/s: 37 rss: 74Mb L: 74/82 MS: 1 ShuffleBytes- 00:15:21.152 [2024-11-05 16:39:25.664274] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:21.152 [2024-11-05 16:39:25.664302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:21.152 [2024-11-05 16:39:25.664365] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:21.152 [2024-11-05 16:39:25.664381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:21.152 #38 NEW cov: 12545 ft: 15913 corp: 32/1368b lim: 100 exec/s: 38 rss: 74Mb L: 41/82 MS: 1 ChangeByte- 00:15:21.152 [2024-11-05 16:39:25.704376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4294912256 len:256 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:21.152 [2024-11-05 
16:39:25.704404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:15:21.152 [2024-11-05 16:39:25.704451] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709027327 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:21.152 [2024-11-05 16:39:25.704468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:15:21.152 #39 NEW cov: 12545 ft: 15948 corp: 33/1409b lim: 100 exec/s: 19 rss: 74Mb L: 41/82 MS: 1 ChangeByte- 00:15:21.152 #39 DONE cov: 12545 ft: 15948 corp: 33/1409b lim: 100 exec/s: 19 rss: 74Mb 00:15:21.152 ###### Recommended dictionary. ###### 00:15:21.152 "~\3014>G\237:\000" # Uses: 0 00:15:21.152 "\000\000\000\000\000\000\000\000" # Uses: 2 00:15:21.152 ###### End of recommended dictionary. ###### 00:15:21.152 Done 39 runs in 2 second(s) 00:15:21.411 16:39:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:15:21.411 16:39:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:15:21.411 16:39:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:15:21.411 16:39:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:15:21.411 00:15:21.411 real 1m6.999s 00:15:21.411 user 1m39.579s 00:15:21.411 sys 0m9.282s 00:15:21.411 16:39:25 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:21.412 16:39:25 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:21.412 ************************************ 00:15:21.412 END TEST nvmf_llvm_fuzz 00:15:21.412 ************************************ 00:15:21.412 16:39:25 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:15:21.412 16:39:25 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:15:21.412 16:39:25 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:15:21.412 16:39:25 llvm_fuzz -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:21.412 16:39:25 llvm_fuzz -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:21.412 16:39:25 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:21.412 ************************************ 00:15:21.412 START TEST vfio_llvm_fuzz 00:15:21.412 ************************************ 00:15:21.412 16:39:25 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:15:21.672 * Looking for test storage... 
00:15:21.673 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:21.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.673 --rc genhtml_branch_coverage=1 00:15:21.673 --rc genhtml_function_coverage=1 00:15:21.673 --rc genhtml_legend=1 00:15:21.673 --rc geninfo_all_blocks=1 00:15:21.673 --rc geninfo_unexecuted_blocks=1 00:15:21.673 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:15:21.673 ' 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:21.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.673 --rc genhtml_branch_coverage=1 00:15:21.673 --rc genhtml_function_coverage=1 00:15:21.673 --rc genhtml_legend=1 00:15:21.673 --rc geninfo_all_blocks=1 00:15:21.673 --rc geninfo_unexecuted_blocks=1 00:15:21.673 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:15:21.673 ' 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:21.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.673 --rc genhtml_branch_coverage=1 00:15:21.673 --rc genhtml_function_coverage=1 00:15:21.673 --rc genhtml_legend=1 00:15:21.673 --rc geninfo_all_blocks=1 00:15:21.673 --rc geninfo_unexecuted_blocks=1 00:15:21.673 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:15:21.673 ' 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:21.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.673 --rc genhtml_branch_coverage=1 00:15:21.673 --rc genhtml_function_coverage=1 00:15:21.673 --rc genhtml_legend=1 00:15:21.673 --rc geninfo_all_blocks=1 00:15:21.673 --rc geninfo_unexecuted_blocks=1 00:15:21.673 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:15:21.673 ' 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FUZZER=y 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:15:21.673 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_ARCH=native 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_SHARED=n 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_FC=n 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@90 -- # CONFIG_URING=n 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:21.674 #define SPDK_CONFIG_H 00:15:21.674 #define SPDK_CONFIG_AIO_FSDEV 1 00:15:21.674 #define SPDK_CONFIG_APPS 1 00:15:21.674 #define SPDK_CONFIG_ARCH native 00:15:21.674 #undef SPDK_CONFIG_ASAN 00:15:21.674 #undef SPDK_CONFIG_AVAHI 00:15:21.674 #undef SPDK_CONFIG_CET 00:15:21.674 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:21.674 #define SPDK_CONFIG_COVERAGE 1 00:15:21.674 #define SPDK_CONFIG_CROSS_PREFIX 00:15:21.674 #undef SPDK_CONFIG_CRYPTO 00:15:21.674 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:21.674 #undef SPDK_CONFIG_CUSTOMOCF 00:15:21.674 #undef SPDK_CONFIG_DAOS 00:15:21.674 #define SPDK_CONFIG_DAOS_DIR 00:15:21.674 #define SPDK_CONFIG_DEBUG 1 00:15:21.674 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:21.674 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:15:21.674 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:21.674 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:21.674 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:21.674 #undef SPDK_CONFIG_DPDK_UADK 00:15:21.674 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:15:21.674 #define SPDK_CONFIG_EXAMPLES 1 00:15:21.674 #undef SPDK_CONFIG_FC 00:15:21.674 #define SPDK_CONFIG_FC_PATH 00:15:21.674 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:21.674 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:21.674 #define SPDK_CONFIG_FSDEV 1 00:15:21.674 #undef SPDK_CONFIG_FUSE 00:15:21.674 #define SPDK_CONFIG_FUZZER 1 00:15:21.674 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:15:21.674 #undef 
SPDK_CONFIG_GOLANG 00:15:21.674 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:21.674 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:21.674 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:21.674 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:21.674 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:21.674 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:21.674 #undef SPDK_CONFIG_HAVE_LZ4 00:15:21.674 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:21.674 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:21.674 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:21.674 #define SPDK_CONFIG_IDXD 1 00:15:21.674 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:21.674 #undef SPDK_CONFIG_IPSEC_MB 00:15:21.674 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:21.674 #define SPDK_CONFIG_ISAL 1 00:15:21.674 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:21.674 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:21.674 #define SPDK_CONFIG_LIBDIR 00:15:21.674 #undef SPDK_CONFIG_LTO 00:15:21.674 #define SPDK_CONFIG_MAX_LCORES 128 00:15:21.674 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:15:21.674 #define SPDK_CONFIG_NVME_CUSE 1 00:15:21.674 #undef SPDK_CONFIG_OCF 00:15:21.674 #define SPDK_CONFIG_OCF_PATH 00:15:21.674 #define SPDK_CONFIG_OPENSSL_PATH 00:15:21.674 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:21.674 #define SPDK_CONFIG_PGO_DIR 00:15:21.674 #undef SPDK_CONFIG_PGO_USE 00:15:21.674 #define SPDK_CONFIG_PREFIX /usr/local 00:15:21.674 #undef SPDK_CONFIG_RAID5F 00:15:21.674 #undef SPDK_CONFIG_RBD 00:15:21.674 #define SPDK_CONFIG_RDMA 1 00:15:21.674 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:21.674 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:21.674 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:21.674 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:21.674 #undef SPDK_CONFIG_SHARED 00:15:21.674 #undef SPDK_CONFIG_SMA 00:15:21.674 #define SPDK_CONFIG_TESTS 1 00:15:21.674 #undef SPDK_CONFIG_TSAN 00:15:21.674 #define SPDK_CONFIG_UBLK 1 00:15:21.674 #define SPDK_CONFIG_UBSAN 1 00:15:21.674 #undef SPDK_CONFIG_UNIT_TESTS 00:15:21.674 #undef SPDK_CONFIG_URING 00:15:21.674 #define SPDK_CONFIG_URING_PATH 00:15:21.674 #undef SPDK_CONFIG_URING_ZNS 00:15:21.674 #undef SPDK_CONFIG_USDT 00:15:21.674 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:21.674 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:21.674 #define SPDK_CONFIG_VFIO_USER 1 00:15:21.674 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:21.674 #define SPDK_CONFIG_VHOST 1 00:15:21.674 #define SPDK_CONFIG_VIRTIO 1 00:15:21.674 #undef SPDK_CONFIG_VTUNE 00:15:21.674 #define SPDK_CONFIG_VTUNE_DIR 00:15:21.674 #define SPDK_CONFIG_WERROR 1 00:15:21.674 #define SPDK_CONFIG_WPDK_DIR 00:15:21.674 #undef SPDK_CONFIG_XNVME 00:15:21.674 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.674 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:15:21.675 16:39:26 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:21.675 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:21.676 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 3530081 ]] 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 3530081 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.k41v7u 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.k41v7u/tests/vfio /tmp/spdk.k41v7u 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:15:21.937 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=81414811648 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500290560 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=13085478912 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:21.938 
16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47245381632 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250145280 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4763648 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=18893955072 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900058112 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=6103040 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=46175830016 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250145280 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=1074315264 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=9450016768 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450029056 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:15:21.938 * Looking for test storage... 
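The storage scan traced above reads `df -T` into parallel associative arrays keyed by mount point, and the candidate loop that follows resolves which mount backs the test directory before comparing its free space against the 2 GiB request. A minimal bash sketch of that pattern; `scan_mounts` and `has_space_for` are hypothetical names, and `-B1` is an assumption here to force byte units (the traced values are byte counts, while plain `df -T` reports 1K blocks):

```bash
#!/usr/bin/env bash
# Sketch of the set_test_storage mount scan traced above; an
# illustration under assumed names, not the SPDK helper itself.

declare -A mounts fss sizes avails uses

scan_mounts() {
    local source fs size used avail _ mount
    # Skip the header row, then record one entry per mount point in the
    # same column order the trace reads: source fs size used avail use% mount.
    while read -r source fs size used avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        avails["$mount"]=$avail
        uses["$mount"]=$used
    done < <(df -T -B1 | grep -v Filesystem)
}

has_space_for() {
    local target_dir=$1 requested=$2 mount
    # Resolve which mount backs the directory exactly as the trace does.
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    (( ${avails[$mount]:-0} >= requested ))
}

scan_mounts
# The traced run asks for 2 GiB (2147483648 bytes) of headroom.
if has_space_for . $((2 * 1024 * 1024 * 1024)); then
    printf '* Found test storage at %s\n' "$PWD"
fi
```

The trace shows the same shape: if the first candidate (the test's own directory) clears the size check, it is exported as SPDK_TEST_STORAGE; otherwise the loop falls through to the mktemp-generated fallback tree.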
00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=81414811648 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=15300071424 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:15:21.938 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1678 -- # set -o errtrace 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1683 -- # true 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1685 -- # xtrace_fd 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:21.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.938 --rc genhtml_branch_coverage=1 00:15:21.938 --rc genhtml_function_coverage=1 00:15:21.938 --rc genhtml_legend=1 00:15:21.938 --rc geninfo_all_blocks=1 00:15:21.938 --rc geninfo_unexecuted_blocks=1 00:15:21.938 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:15:21.938 ' 00:15:21.938 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:21.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.938 --rc genhtml_branch_coverage=1 00:15:21.938 --rc genhtml_function_coverage=1 00:15:21.938 --rc genhtml_legend=1 00:15:21.938 --rc geninfo_all_blocks=1 00:15:21.938 --rc geninfo_unexecuted_blocks=1 00:15:21.938 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:15:21.938 ' 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:21.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.939 --rc genhtml_branch_coverage=1 00:15:21.939 --rc genhtml_function_coverage=1 00:15:21.939 --rc genhtml_legend=1 00:15:21.939 --rc geninfo_all_blocks=1 00:15:21.939 --rc geninfo_unexecuted_blocks=1 00:15:21.939 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:15:21.939 ' 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:21.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.939 --rc genhtml_branch_coverage=1 00:15:21.939 --rc genhtml_function_coverage=1 00:15:21.939 --rc genhtml_legend=1 00:15:21.939 --rc geninfo_all_blocks=1 00:15:21.939 --rc geninfo_unexecuted_blocks=1 00:15:21.939 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:15:21.939 ' 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:15:21.939 16:39:26 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:15:21.939 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:15:21.939 16:39:26 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:15:21.939 [2024-11-05 16:39:26.477217] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:15:21.939 [2024-11-05 16:39:26.477297] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3530138 ] 00:15:22.198 [2024-11-05 16:39:26.619945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.198 [2024-11-05 16:39:26.677143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.456 INFO: Running with entropic power schedule (0xFF, 100). 00:15:22.456 INFO: Seed: 1449796689 00:15:22.456 INFO: Loaded 1 modules (384647 inline 8-bit counters): 384647 [0x2bfb24c, 0x2c590d3), 00:15:22.456 INFO: Loaded 1 PC tables (384647 PCs): 384647 [0x2c590d8,0x3237948), 00:15:22.456 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:15:22.456 INFO: A corpus is not provided, starting from an empty corpus 00:15:22.456 #2 INITED exec/s: 0 rss: 68Mb 00:15:22.456 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:15:22.456 This may also happen if the target rejected all inputs we tried so far 00:15:22.456 [2024-11-05 16:39:26.953786] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:15:23.281 NEW_FUNC[1/671]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:15:23.281 NEW_FUNC[2/671]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:15:23.281 #7 NEW cov: 11158 ft: 10931 corp: 2/7b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 5 CrossOver-CrossOver-ChangeBit-EraseBytes-InsertRepeatedBytes- 00:15:23.281 NEW_FUNC[1/2]: 0x138e1f8 in from_le32 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/endian.h:100 00:15:23.281 NEW_FUNC[2/2]: 0x1bfc8a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:15:23.281 #13 NEW cov: 11201 ft: 14115 corp: 3/13b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 ChangeBinInt- 00:15:23.540 #14 NEW cov: 11201 ft: 15393 corp: 4/19b lim: 6 exec/s: 14 rss: 76Mb L: 6/6 MS: 1 ChangeBit- 00:15:23.799 #15 NEW cov: 11211 ft: 15646 corp: 5/25b lim: 6 exec/s: 15 rss: 76Mb L: 6/6 MS: 1 ShuffleBytes- 00:15:24.057 #21 NEW cov: 11211 ft: 16616 corp: 6/31b lim: 6 exec/s: 21 rss: 76Mb L: 6/6 MS: 1 ChangeBit- 00:15:24.314 #23 NEW cov: 11222 ft: 16854 corp: 7/37b lim: 6 exec/s: 23 rss: 76Mb L: 6/6 MS: 2 EraseBytes-CopyPart- 00:15:24.573 #24 NEW cov: 11222 ft: 17000 corp: 8/43b lim: 6 exec/s: 12 rss: 76Mb L: 6/6 MS: 1 ChangeBinInt- 00:15:24.573 #24 DONE cov: 11222 ft: 17000 corp: 8/43b lim: 6 exec/s: 12 rss: 76Mb 00:15:24.573 Done 24 runs in 2 second(s) 00:15:24.573 [2024-11-05 16:39:29.019957] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:15:24.831 16:39:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:15:24.831 16:39:29 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:15:24.831 16:39:29 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:15:24.831 16:39:29 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:15:24.831 16:39:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:15:24.831 16:39:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:15:24.831 16:39:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:15:24.831 16:39:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:15:24.831 16:39:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:15:24.831 16:39:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:15:24.831 16:39:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:15:24.831 16:39:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:15:24.831 16:39:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:15:24.831 16:39:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:15:24.831 16:39:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:15:24.831 16:39:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:15:24.831 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:15:24.831 16:39:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:15:24.831 16:39:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:15:24.831 16:39:29 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:15:24.831 [2024-11-05 16:39:29.324039] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:15:24.832 [2024-11-05 16:39:29.324124] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3530501 ] 00:15:25.090 [2024-11-05 16:39:29.469047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.090 [2024-11-05 16:39:29.526570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.348 INFO: Running with entropic power schedule (0xFF, 100). 
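The second fuzzer instance is being prepared here with the same steps traced for instance 0: create a per-type `/tmp/vfio-user-N` tree, rewrite the shared `fuzz_vfio_json.conf` template with `sed` so it points at the per-instance domain sockets, and populate the LSAN suppression file before launching `llvm_vfio_fuzz`. A sketch of that setup under assumed stand-in paths (`CORPUS_ROOT`, `TEMPLATE_CONF`); it illustrates the traced commands rather than reproducing vfio/run.sh:

```bash
#!/usr/bin/env bash
# Sketch of the per-instance setup traced in vfio/run.sh. The paths
# CORPUS_ROOT and TEMPLATE_CONF are hypothetical stand-ins for the
# workspace locations seen in the trace.

CORPUS_ROOT=${CORPUS_ROOT:-/tmp/corpus}
TEMPLATE_CONF=${TEMPLATE_CONF:-./fuzz_vfio_json.conf}

setup_vfio_fuzzer() {
    local n=$1
    local dir=/tmp/vfio-user-$n
    local suppress=/var/tmp/suppress_vfio_fuzz

    # One domain dir for the vfio-user transport, one for the IO side,
    # plus a persistent corpus directory per fuzzer type.
    mkdir -p "$dir/domain/1" "$dir/domain/2" "$CORPUS_ROOT/llvm_vfio_$n"

    # Rewrite the shared template so this instance uses its own sockets.
    sed -e "s%/tmp/vfio-user/domain/1%$dir/domain/1%;s%/tmp/vfio-user/domain/2%$dir/domain/2%" \
        "$TEMPLATE_CONF" > "$dir/fuzz_vfio_json.conf"

    # Known, deliberate leaks are suppressed so LSAN does not fail the run.
    echo leak:spdk_nvmf_qpair_disconnect > "$suppress"
    echo leak:nvmf_ctrlr_create >> "$suppress"
}

setup_vfio_fuzzer 1
```

In the traced invocation that follows this setup, `-t 1` caps each run at one second, `-D` names the persistent corpus directory, `-Z` selects the fuzzer type, and `-m 0x1 -s 0` appear to pin the reactor to core 0 with no fixed memory size reserved.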
00:15:25.348 INFO: Seed: 12318922 00:15:25.348 INFO: Loaded 1 modules (384647 inline 8-bit counters): 384647 [0x2bfb24c, 0x2c590d3), 00:15:25.348 INFO: Loaded 1 PC tables (384647 PCs): 384647 [0x2c590d8,0x3237948), 00:15:25.348 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:15:25.348 INFO: A corpus is not provided, starting from an empty corpus 00:15:25.348 #2 INITED exec/s: 0 rss: 68Mb 00:15:25.348 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:15:25.348 This may also happen if the target rejected all inputs we tried so far 00:15:25.348 [2024-11-05 16:39:29.831651] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:15:25.348 [2024-11-05 16:39:29.870794] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:15:25.348 [2024-11-05 16:39:29.870830] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:15:25.348 [2024-11-05 16:39:29.870855] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:15:25.864 NEW_FUNC[1/674]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:15:25.864 NEW_FUNC[2/674]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:15:25.864 #62 NEW cov: 11162 ft: 11110 corp: 2/5b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 5 ChangeByte-ChangeBinInt-InsertByte-InsertByte-InsertByte- 00:15:26.121 [2024-11-05 16:39:30.472134] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:15:26.121 [2024-11-05 16:39:30.472184] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:15:26.121 [2024-11-05 16:39:30.472208] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:15:26.121 NEW_FUNC[1/1]: 0x1bfc8a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:15:26.121 #63 NEW cov: 11193 ft: 13726 corp: 3/9b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 CrossOver- 00:15:26.122 [2024-11-05 16:39:30.642296] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:15:26.122 [2024-11-05 16:39:30.642328] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:15:26.122 [2024-11-05 16:39:30.642352] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:15:26.380 #69 NEW cov: 11193 ft: 15400 corp: 4/13b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 1 ChangeBinInt- 00:15:26.380 [2024-11-05 16:39:30.811725] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:15:26.380 [2024-11-05 16:39:30.811765] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:15:26.380 [2024-11-05 16:39:30.811789] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:15:26.380 #70 NEW cov: 11193 ft: 15887 corp: 5/17b lim: 4 exec/s: 70 rss: 77Mb L: 4/4 MS: 1 CrossOver- 00:15:26.638 [2024-11-05 16:39:30.969833] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:15:26.638 [2024-11-05 16:39:30.969863] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:15:26.638 [2024-11-05 16:39:30.969901] vfio_user.c: 144:vfio_user_read: *ERROR*: 
Command 1 return failure 00:15:26.638 #71 NEW cov: 11193 ft: 16303 corp: 6/21b lim: 4 exec/s: 71 rss: 77Mb L: 4/4 MS: 1 ShuffleBytes- 00:15:26.638 [2024-11-05 16:39:31.134750] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:15:26.638 [2024-11-05 16:39:31.134782] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:15:26.638 [2024-11-05 16:39:31.134828] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:15:26.896 #79 NEW cov: 11193 ft: 16541 corp: 7/25b lim: 4 exec/s: 79 rss: 77Mb L: 4/4 MS: 3 EraseBytes-CrossOver-CrossOver- 00:15:26.896 [2024-11-05 16:39:31.292312] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:15:26.896 [2024-11-05 16:39:31.292343] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:15:26.896 [2024-11-05 16:39:31.292365] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:15:26.896 #85 NEW cov: 11193 ft: 16638 corp: 8/29b lim: 4 exec/s: 85 rss: 77Mb L: 4/4 MS: 1 ChangeByte- 00:15:26.896 [2024-11-05 16:39:31.450121] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:15:26.896 [2024-11-05 16:39:31.450150] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:15:26.896 [2024-11-05 16:39:31.450172] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:15:27.154 #86 NEW cov: 11200 ft: 17154 corp: 9/33b lim: 4 exec/s: 86 rss: 77Mb L: 4/4 MS: 1 CrossOver- 00:15:27.154 [2024-11-05 16:39:31.608879] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:15:27.154 [2024-11-05 16:39:31.608908] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:15:27.154 [2024-11-05 16:39:31.608944] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:15:27.154 #87 NEW cov: 11200 ft: 17782 corp: 10/37b lim: 4 exec/s: 87 rss: 77Mb L: 4/4 MS: 1 ChangeBit- 00:15:27.413 [2024-11-05 16:39:31.766433] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:15:27.413 [2024-11-05 16:39:31.766461] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:15:27.413 [2024-11-05 16:39:31.766486] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:15:27.413 #88 NEW cov: 11200 ft: 18186 corp: 11/41b lim: 4 exec/s: 44 rss: 77Mb L: 4/4 MS: 1 ChangeByte- 00:15:27.413 #88 DONE cov: 11200 ft: 18186 corp: 11/41b lim: 4 exec/s: 44 rss: 77Mb 00:15:27.413 Done 88 runs in 2 second(s) 00:15:27.413 [2024-11-05 16:39:31.881980] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=2 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- 
vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:15:27.672 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:15:27.672 16:39:32 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:15:27.672 [2024-11-05 16:39:32.200455] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:15:27.673 [2024-11-05 16:39:32.200543] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3530864 ] 00:15:27.931 [2024-11-05 16:39:32.343742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.931 [2024-11-05 16:39:32.400818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.189 INFO: Running with entropic power schedule (0xFF, 100). 00:15:28.189 INFO: Seed: 2883851664 00:15:28.189 INFO: Loaded 1 modules (384647 inline 8-bit counters): 384647 [0x2bfb24c, 0x2c590d3), 00:15:28.189 INFO: Loaded 1 PC tables (384647 PCs): 384647 [0x2c590d8,0x3237948), 00:15:28.189 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:15:28.189 INFO: A corpus is not provided, starting from an empty corpus 00:15:28.189 #2 INITED exec/s: 0 rss: 67Mb 00:15:28.189 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:15:28.189 This may also happen if the target rejected all inputs we tried so far 00:15:28.189 [2024-11-05 16:39:32.686112] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:15:28.189 [2024-11-05 16:39:32.724698] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:15:28.706 NEW_FUNC[1/673]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:15:28.706 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:15:28.706 #12 NEW cov: 11142 ft: 10931 corp: 2/9b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 5 CrossOver-InsertByte-ChangeByte-CrossOver-InsertRepeatedBytes- 00:15:28.706 [2024-11-05 16:39:33.178307] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:15:28.706 #13 NEW cov: 11159 ft: 14774 corp: 3/17b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 ChangeBinInt- 00:15:28.965 [2024-11-05 16:39:33.358039] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:15:28.965 NEW_FUNC[1/1]: 0x1bfc8a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:15:28.965 #14 NEW cov: 11176 ft: 15342 corp: 4/25b lim: 8 exec/s: 0 rss: 76Mb L: 8/8 MS: 1 ChangeByte- 00:15:28.965 [2024-11-05 16:39:33.529171] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:15:29.223 #20 NEW cov: 11176 ft: 16750 corp: 5/33b lim: 8 exec/s: 20 rss: 77Mb L: 8/8 MS: 1 CrossOver- 00:15:29.223 [2024-11-05 16:39:33.711818] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:15:29.481 #26 NEW cov: 11176 ft: 17023 corp: 6/41b lim: 8 exec/s: 26 rss: 77Mb L: 8/8 MS: 1 CrossOver- 00:15:29.481 [2024-11-05 16:39:33.884689] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:15:29.481 #37 NEW cov: 11176 ft: 17147 corp: 7/49b lim: 8 exec/s: 37 rss: 77Mb L: 8/8 MS: 1 ChangeByte- 00:15:29.481 [2024-11-05 16:39:34.067296] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:15:29.738 #43 NEW cov: 11176 ft: 17234 corp: 8/57b lim: 8 exec/s: 43 rss: 77Mb L: 8/8 MS: 1 CopyPart- 00:15:29.738 [2024-11-05 16:39:34.240865] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:15:29.994 #44 NEW cov: 11176 ft: 17452 corp: 9/65b lim: 8 exec/s: 44 rss: 77Mb L: 8/8 MS: 1 ChangeByte- 00:15:29.994 [2024-11-05 16:39:34.415473] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:15:29.994 #45 NEW cov: 11183 ft: 18022 corp: 10/73b lim: 8 exec/s: 45 rss: 77Mb L: 8/8 MS: 1 ChangeBinInt- 00:15:30.302 [2024-11-05 16:39:34.599263] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:15:30.302 #46 NEW cov: 11183 ft: 18047 corp: 11/81b lim: 8 exec/s: 23 rss: 77Mb L: 8/8 MS: 1 ChangeBit- 00:15:30.302 #46 DONE cov: 11183 ft: 18047 corp: 11/81b lim: 8 exec/s: 23 rss: 77Mb 00:15:30.302 Done 46 runs in 2 second(s) 00:15:30.302 [2024-11-05 16:39:34.722956] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:15:30.587 16:39:34 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:15:30.587 16:39:34 llvm_fuzz.vfio_llvm_fuzz -- 
../common.sh@72 -- # (( i++ )) 00:15:30.587 16:39:34 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:15:30.587 16:39:35 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:15:30.587 16:39:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:15:30.587 16:39:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:15:30.587 16:39:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:15:30.587 16:39:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:15:30.587 16:39:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:15:30.587 16:39:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:15:30.587 16:39:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:15:30.587 16:39:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:15:30.587 16:39:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:15:30.587 16:39:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:15:30.587 16:39:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:15:30.587 16:39:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:15:30.587 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:15:30.587 16:39:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:15:30.587 16:39:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:15:30.587 16:39:35 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:15:30.587 [2024-11-05 16:39:35.042621] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:15:30.587 [2024-11-05 16:39:35.042708] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3531218 ] 00:15:30.858 [2024-11-05 16:39:35.186191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.858 [2024-11-05 16:39:35.245921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.116 INFO: Running with entropic power schedule (0xFF, 100). 
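Between runs, `../common.sh` simply advances `i` and re-enters `start_llvm_fuzz` until all seven fuzzer types have had their one-second turn; the count itself comes from grepping the `.fn =` registrations out of llvm_vfio_fuzz.c, as traced at the start of this phase. A compact sketch of that driver loop, with the launcher body reduced to a placeholder and the source path shortened from the full workspace path in the trace:

```bash
#!/usr/bin/env bash
# Sketch of the start_llvm_fuzz_short loop traced from common.sh.

fuzzfile=llvm_vfio_fuzz.c   # shortened stand-in for the traced path

start_llvm_fuzz() {
    # Placeholder for the real per-type launcher (vfio/run.sh@22-58).
    local fuzzer_type=$1 timen=$2 core=$3
    echo "would run fuzzer type $fuzzer_type for ${timen}s on mask $core"
}

start_llvm_fuzz_short() {
    local fuzz_num=$1 time=$2 i
    for (( i = 0; i < fuzz_num; i++ )); do
        start_llvm_fuzz "$i" "$time" 0x1
    done
}

# Each registered fuzzer appears as a '.fn =' initializer in the source,
# so counting those lines yields the number of fuzzer types (7 here).
fuzz_num=$(grep -c '\.fn =' "$fuzzfile")
(( fuzz_num != 0 )) || exit 1
start_llvm_fuzz_short "$fuzz_num" 1
```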
00:15:31.116 INFO: Seed: 1431871702 00:15:31.116 INFO: Loaded 1 modules (384647 inline 8-bit counters): 384647 [0x2bfb24c, 0x2c590d3), 00:15:31.116 INFO: Loaded 1 PC tables (384647 PCs): 384647 [0x2c590d8,0x3237948), 00:15:31.116 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:15:31.116 INFO: A corpus is not provided, starting from an empty corpus 00:15:31.116 #2 INITED exec/s: 0 rss: 65Mb 00:15:31.116 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:15:31.116 This may also happen if the target rejected all inputs we tried so far 00:15:31.116 [2024-11-05 16:39:35.527022] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:15:31.682 NEW_FUNC[1/672]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:15:31.682 NEW_FUNC[2/672]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:15:31.682 #42 NEW cov: 11147 ft: 11088 corp: 2/33b lim: 32 exec/s: 0 rss: 73Mb L: 32/32 MS: 5 CopyPart-InsertRepeatedBytes-CopyPart-InsertByte-InsertRepeatedBytes- 00:15:31.682 NEW_FUNC[1/1]: 0x12cae18 in nvmf_qpair_is_admin_queue /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/./nvmf_internal.h:531 00:15:31.682 #43 NEW cov: 11167 ft: 14300 corp: 3/65b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ChangeBinInt- 00:15:31.941 NEW_FUNC[1/1]: 0x1bfc8a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:15:31.941 #44 NEW cov: 11184 ft: 14906 corp: 4/97b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 1 ShuffleBytes- 00:15:31.941 #50 NEW cov: 11184 ft: 15544 corp: 5/129b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:15:32.198 #56 NEW cov: 11184 ft: 16675 corp: 6/161b lim: 32 exec/s: 56 rss: 75Mb L: 32/32 MS: 1 ChangeByte- 00:15:32.459 #57 NEW cov: 11184 ft: 17124 corp: 7/193b lim: 32 exec/s: 57 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:15:32.459 #58 NEW cov: 11184 ft: 17330 corp: 8/225b lim: 32 exec/s: 58 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:15:32.717 #59 NEW cov: 11184 ft: 17468 corp: 9/257b lim: 32 exec/s: 59 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:15:32.717 #60 NEW cov: 11191 ft: 17638 corp: 10/289b lim: 32 exec/s: 60 rss: 75Mb L: 32/32 MS: 1 ChangeBit- 00:15:32.974 #66 NEW cov: 11191 ft: 17675 corp: 11/321b lim: 32 exec/s: 66 rss: 75Mb L: 32/32 MS: 1 CopyPart- 00:15:33.233 #67 NEW cov: 11191 ft: 17700 corp: 12/353b lim: 32 exec/s: 33 rss: 75Mb L: 32/32 MS: 1 ChangeBinInt- 00:15:33.233 #67 DONE cov: 11191 ft: 17700 corp: 12/353b lim: 32 exec/s: 33 rss: 75Mb 00:15:33.233 Done 67 runs in 2 second(s) 00:15:33.233 [2024-11-05 16:39:37.621965] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 
00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:15:33.492 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:15:33.492 16:39:37 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:15:33.492 [2024-11-05 16:39:37.945868] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization... 00:15:33.492 [2024-11-05 16:39:37.945945] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3531663 ] 00:15:33.492 [2024-11-05 16:39:38.071099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.750 [2024-11-05 16:39:38.127367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.009 INFO: Running with entropic power schedule (0xFF, 100). 00:15:34.009 INFO: Seed: 14902281 00:15:34.009 INFO: Loaded 1 modules (384647 inline 8-bit counters): 384647 [0x2bfb24c, 0x2c590d3), 00:15:34.009 INFO: Loaded 1 PC tables (384647 PCs): 384647 [0x2c590d8,0x3237948), 00:15:34.009 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:15:34.009 INFO: A corpus is not provided, starting from an empty corpus 00:15:34.009 #2 INITED exec/s: 0 rss: 67Mb 00:15:34.009 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
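Each run ends with a libFuzzer summary of the form `#N DONE cov: ... ft: ... exec/s: ... rss: ...`, as in the runs above. A small sketch for pulling those numbers out of a captured log; this is a post-processing illustration, not an SPDK tool, and the field positions are assumed from the layout shown in this log:

```bash
#!/usr/bin/env bash
# Sketch: extract coverage, feature, and throughput counters from the
# libFuzzer DONE lines in a saved log. Prints one line per DONE record
# (libFuzzer emits the summary twice per run, as seen above).

summarize_fuzz_log() {
    awk '/DONE/ {
        cov = ft = eps = "?"
        # Scan token pairs: each label ("cov:", "ft:", "exec/s:") is
        # immediately followed by its value.
        for (i = 1; i <= NF; i++) {
            if ($i == "cov:")    cov = $(i + 1)
            if ($i == "ft:")     ft  = $(i + 1)
            if ($i == "exec/s:") eps = $(i + 1)
        }
        printf "coverage=%s features=%s exec/s=%s\n", cov, ft, eps
    }' "$1"
}

summarize_fuzz_log "${1:?usage: summarize_fuzz_log <logfile>}"
```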
00:15:34.009 This may also happen if the target rejected all inputs we tried so far
00:15:34.009 [2024-11-05 16:39:38.404910] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller
00:15:34.009 [2024-11-05 16:39:38.444768] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 4485069557907587072 > max 8796093022208
00:15:34.009 [2024-11-05 16:39:38.444805] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0x3e3e2b0000000000) offset=0x3e3e3e3e3e3e3e3e flags=0x3: No space left on device
00:15:34.009 [2024-11-05 16:39:38.444822] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device
00:15:34.009 [2024-11-05 16:39:38.444845] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:15:34.009 [2024-11-05 16:39:38.445736] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0x3e3e2b0000000000) flags=0: No such file or directory
00:15:34.009 [2024-11-05 16:39:38.445755] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:15:34.009 [2024-11-05 16:39:38.445776] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:15:34.267 NEW_FUNC[1/674]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144
00:15:34.267 NEW_FUNC[2/674]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:15:34.267 #92 NEW cov: 11164 ft: 11113 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 5 CrossOver-ChangeByte-InsertRepeatedBytes-CopyPart-InsertRepeatedBytes-
00:15:34.526 #100 NEW cov: 11183 ft: 14243 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 3 InsertByte-ChangeBit-InsertRepeatedBytes-
00:15:34.526 [2024-11-05 16:39:39.104209] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 4485069557907587072 > max 8796093022208
00:15:34.526 [2024-11-05 16:39:39.104263] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x8000000, 0x3e3e2b0008000000) offset=0x3e3e3e3e3e3e3e3e flags=0x3: No space left on device
00:15:34.526 [2024-11-05 16:39:39.104285] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device
00:15:34.526 [2024-11-05 16:39:39.104308] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:15:34.526 [2024-11-05 16:39:39.105221] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x8000000, 0x3e3e2b0008000000) flags=0: No such file or directory
00:15:34.526 [2024-11-05 16:39:39.105247] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:15:34.526 [2024-11-05 16:39:39.105269] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:15:34.785 NEW_FUNC[1/1]: 0x1bfc8a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662
00:15:34.785 #101 NEW cov: 11203 ft: 15092 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 ChangeBit-
00:15:34.785 [2024-11-05 16:39:39.286118] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 4485069557907587072 > max 8796093022208
00:15:34.785 [2024-11-05 16:39:39.286152] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0x3e3e2b0000000000) offset=0x3e3e3e3e3e3e3e3e flags=0x3: No space left on device
00:15:34.785 [2024-11-05 16:39:39.286169] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device
00:15:34.785 [2024-11-05 16:39:39.286190] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:15:34.785 [2024-11-05 16:39:39.287140] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0x3e3e2b0000000000) flags=0: No such file or directory
00:15:34.785 [2024-11-05 16:39:39.287166] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:15:34.785 [2024-11-05 16:39:39.287188] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:15:35.043 #102 NEW cov: 11203 ft: 15897 corp: 5/129b lim: 32 exec/s: 102 rss: 76Mb L: 32/32 MS: 1 ShuffleBytes-
00:15:35.043 [2024-11-05 16:39:39.467890] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 4485069557907587072 > max 8796093022208
00:15:35.043 [2024-11-05 16:39:39.467922] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x8000000, 0x3e3e2b0008000000) offset=0x3e3e3e3e3e3e3e3e flags=0x3: No space left on device
00:15:35.043 [2024-11-05 16:39:39.467939] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device
00:15:35.043 [2024-11-05 16:39:39.467962] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:15:35.043 [2024-11-05 16:39:39.468906] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x8000000, 0x3e3e2b0008000000) flags=0: No such file or directory
00:15:35.043 [2024-11-05 16:39:39.468932] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:15:35.043 [2024-11-05 16:39:39.468954] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:15:35.044 #108 NEW cov: 11203 ft: 16502 corp: 6/161b lim: 32 exec/s: 108 rss: 77Mb L: 32/32 MS: 1 ShuffleBytes-
00:15:35.300 #109 NEW cov: 11203 ft: 17077 corp: 7/193b lim: 32 exec/s: 109 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt-
00:15:35.300 [2024-11-05 16:39:39.839808] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 4485069557907587072 > max 8796093022208
00:15:35.300 [2024-11-05 16:39:39.839840] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x3e3e3e2b000000, 0x3e7c693e2b000000) offset=0x3e3e3e3e3e3e3e3e flags=0x3: No space left on device
00:15:35.300 [2024-11-05 16:39:39.839857] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device
00:15:35.300 [2024-11-05 16:39:39.839879] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:15:35.300 [2024-11-05 16:39:39.840823] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x3e3e3e2b000000, 0x3e7c693e2b000000) flags=0: No such file or directory
00:15:35.300 [2024-11-05 16:39:39.840849] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:15:35.300 [2024-11-05 16:39:39.840870] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:15:35.557 #110 NEW cov: 11203 ft: 17282 corp: 8/225b lim: 32 exec/s: 110 rss: 77Mb L: 32/32 MS: 1 CopyPart-
00:15:35.557 [2024-11-05 16:39:40.021914] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 4485069557907587071 > max 8796093022208
00:15:35.557 [2024-11-05 16:39:40.021947] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0xfffa000000000000, 0x3e382affffffffff) offset=0x3e3e3e3e3e3e3e3e flags=0x3: No space left on device
00:15:35.557 [2024-11-05 16:39:40.021965] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device
00:15:35.557 [2024-11-05 16:39:40.021987] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:15:35.557 [2024-11-05 16:39:40.022941] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0xfffa000000000000, 0x3e382affffffffff) flags=0: No such file or directory
00:15:35.558 [2024-11-05 16:39:40.022968] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:15:35.558 [2024-11-05 16:39:40.022989] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:15:35.558 #111 NEW cov: 11203 ft: 17339 corp: 9/257b lim: 32 exec/s: 111 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt-
00:15:35.816 #112 NEW cov: 11210 ft: 17512 corp: 10/289b lim: 32 exec/s: 112 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt-
00:15:35.816 [2024-11-05 16:39:40.391072] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 4485069557907587072 > max 8796093022208
00:15:35.816 [2024-11-05 16:39:40.391109] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0x80000, 0x3e3e2b0000080000) offset=0x3e3e3e3e3e3e3e3e flags=0x3: No space left on device
00:15:35.816 [2024-11-05 16:39:40.391126] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device
00:15:35.816 [2024-11-05 16:39:40.391149] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure
00:15:35.816 [2024-11-05 16:39:40.392113] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0x80000, 0x3e3e2b0000080000) flags=0: No such file or directory
00:15:35.816 [2024-11-05 16:39:40.392138] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory
00:15:35.816 [2024-11-05 16:39:40.392160] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure
00:15:36.075 #113 NEW cov: 11210 ft: 17520 corp: 11/321b lim: 32 exec/s: 56 rss: 77Mb L: 32/32 MS: 1 ShuffleBytes-
00:15:36.075 #113 DONE cov: 11210 ft: 17520 corp: 11/321b lim: 32 exec/s: 56 rss: 77Mb
00:15:36.075 Done 113 runs in 2 second(s)
00:15:36.075 [2024-11-05 16:39:40.520964] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%;
00:15:36.334 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:15:36.334 16:39:40 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5
00:15:36.334 [2024-11-05 16:39:40.848252] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization...
00:15:36.334 [2024-11-05 16:39:40.848349] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3532104 ]
00:15:36.593 [2024-11-05 16:39:40.995391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:36.593 [2024-11-05 16:39:41.052233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:36.852 INFO: Running with entropic power schedule (0xFF, 100).
00:15:36.852 INFO: Seed: 2941901867
00:15:36.852 INFO: Loaded 1 modules (384647 inline 8-bit counters): 384647 [0x2bfb24c, 0x2c590d3),
00:15:36.852 INFO: Loaded 1 PC tables (384647 PCs): 384647 [0x2c590d8,0x3237948),
00:15:36.852 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5
00:15:36.852 INFO: A corpus is not provided, starting from an empty corpus
00:15:36.852 #2 INITED exec/s: 0 rss: 67Mb
00:15:36.852 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:15:36.852 This may also happen if the target rejected all inputs we tried so far
00:15:36.852 [2024-11-05 16:39:41.331627] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller
00:15:36.852 [2024-11-05 16:39:41.368780] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:36.852 [2024-11-05 16:39:41.368827] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:37.369 NEW_FUNC[1/674]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171
00:15:37.369 NEW_FUNC[2/674]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:15:37.369 #8 NEW cov: 11162 ft: 11117 corp: 2/14b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 InsertRepeatedBytes-
00:15:37.369 [2024-11-05 16:39:41.828678] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:37.369 [2024-11-05 16:39:41.828741] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:37.369 #24 NEW cov: 11178 ft: 14742 corp: 3/27b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 1 ChangeBit-
00:15:37.627 [2024-11-05 16:39:42.009986] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:37.627 [2024-11-05 16:39:42.010031] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:37.627 NEW_FUNC[1/1]: 0x1bfc8a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662
00:15:37.627 #25 NEW cov: 11195 ft: 15396 corp: 4/40b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 CopyPart-
00:15:37.627 [2024-11-05 16:39:42.180318] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:37.627 [2024-11-05 16:39:42.180358] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:37.885 #36 NEW cov: 11195 ft: 16540 corp: 5/53b lim: 13 exec/s: 36 rss: 76Mb L: 13/13 MS: 1 CrossOver-
00:15:37.885 [2024-11-05 16:39:42.358760] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:37.885 [2024-11-05 16:39:42.358801] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:37.885 #37 NEW cov: 11195 ft: 16929 corp: 6/66b lim: 13 exec/s: 37 rss: 77Mb L: 13/13 MS: 1 ChangeBinInt-
00:15:38.143 [2024-11-05 16:39:42.528124] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:38.143 [2024-11-05 16:39:42.528163] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:38.143 #43 NEW cov: 11195 ft: 17187 corp: 7/79b lim: 13 exec/s: 43 rss: 77Mb L: 13/13 MS: 1 ShuffleBytes-
00:15:38.143 [2024-11-05 16:39:42.696304] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:38.143 [2024-11-05 16:39:42.696341] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:38.402 #44 NEW cov: 11195 ft: 17993 corp: 8/92b lim: 13 exec/s: 44 rss: 77Mb L: 13/13 MS: 1 ShuffleBytes-
00:15:38.402 [2024-11-05 16:39:42.865665] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:38.402 [2024-11-05 16:39:42.865704] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:38.402 #50 NEW cov: 11195 ft: 18257 corp: 9/105b lim: 13 exec/s: 50 rss: 77Mb L: 13/13 MS: 1 CrossOver-
00:15:38.661 [2024-11-05 16:39:43.035936] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:38.661 [2024-11-05 16:39:43.035974] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:38.661 #51 NEW cov: 11202 ft: 18532 corp: 10/118b lim: 13 exec/s: 51 rss: 77Mb L: 13/13 MS: 1 ChangeBinInt-
00:15:38.661 [2024-11-05 16:39:43.205061] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:38.661 [2024-11-05 16:39:43.205100] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:38.919 #52 NEW cov: 11202 ft: 18891 corp: 11/131b lim: 13 exec/s: 26 rss: 77Mb L: 13/13 MS: 1 ChangeByte-
00:15:38.919 #52 DONE cov: 11202 ft: 18891 corp: 11/131b lim: 13 exec/s: 26 rss: 77Mb
00:15:38.919 Done 52 runs in 2 second(s)
00:15:38.919 [2024-11-05 16:39:43.327973] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%;
00:15:39.178 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create
00:15:39.178 16:39:43 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6
00:15:39.178 [2024-11-05 16:39:43.655222] Starting SPDK v25.01-pre git sha1 4c618f461 / DPDK 24.03.0 initialization...
00:15:39.178 [2024-11-05 16:39:43.655305] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3532464 ]
00:15:39.437 [2024-11-05 16:39:43.800793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:39.437 [2024-11-05 16:39:43.857301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:39.695 INFO: Running with entropic power schedule (0xFF, 100).
00:15:39.695 INFO: Seed: 1449930571
00:15:39.695 INFO: Loaded 1 modules (384647 inline 8-bit counters): 384647 [0x2bfb24c, 0x2c590d3),
00:15:39.695 INFO: Loaded 1 PC tables (384647 PCs): 384647 [0x2c590d8,0x3237948),
00:15:39.695 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6
00:15:39.695 INFO: A corpus is not provided, starting from an empty corpus
00:15:39.695 #2 INITED exec/s: 0 rss: 67Mb
00:15:39.695 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:15:39.695 This may also happen if the target rejected all inputs we tried so far
00:15:39.695 [2024-11-05 16:39:44.134835] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller
00:15:39.695 [2024-11-05 16:39:44.174766] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:39.695 [2024-11-05 16:39:44.174807] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:40.212 NEW_FUNC[1/673]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190
00:15:40.212 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220
00:15:40.212 #24 NEW cov: 11133 ft: 11102 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 2 ChangeBit-CMP- DE: "C\221\277\246Q\237:\000"-
00:15:40.212 [2024-11-05 16:39:44.639198] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:40.212 [2024-11-05 16:39:44.639254] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:40.212 NEW_FUNC[1/1]: 0x15a0138 in index_to_sg_t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:677
00:15:40.212 #28 NEW cov: 11167 ft: 14575 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 4 EraseBytes-ChangeBit-ChangeBinInt-CopyPart-
00:15:40.470 [2024-11-05 16:39:44.821827] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:40.470 [2024-11-05 16:39:44.821873] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:40.470 NEW_FUNC[1/1]: 0x1bfc8a8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662
00:15:40.470 #29 NEW cov: 11184 ft: 15141 corp: 4/28b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 1 ChangeBit-
00:15:40.470 [2024-11-05 16:39:44.993126] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:40.470 [2024-11-05 16:39:44.993167] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:40.729 #35 NEW cov: 11184 ft: 16326 corp: 5/37b lim: 9 exec/s: 35 rss: 77Mb L: 9/9 MS: 1 CopyPart-
00:15:40.729 [2024-11-05 16:39:45.165679] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:40.729 [2024-11-05 16:39:45.165728] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:40.729 #36 NEW cov: 11184 ft: 17161 corp: 6/46b lim: 9 exec/s: 36 rss: 77Mb L: 9/9 MS: 1 ShuffleBytes-
00:15:40.987 [2024-11-05 16:39:45.337542] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:40.987 [2024-11-05 16:39:45.337582] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:40.987 #42 NEW cov: 11187 ft: 17441 corp: 7/55b lim: 9 exec/s: 42 rss: 77Mb L: 9/9 MS: 1 ShuffleBytes-
00:15:40.987 [2024-11-05 16:39:45.509231] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:40.987 [2024-11-05 16:39:45.509272] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:41.245 #43 NEW cov: 11187 ft: 17544 corp: 8/64b lim: 9 exec/s: 43 rss: 77Mb L: 9/9 MS: 1 CrossOver-
00:15:41.245 [2024-11-05 16:39:45.681296] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:41.245 [2024-11-05 16:39:45.681336] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:41.245 #44 NEW cov: 11187 ft: 17779 corp: 9/73b lim: 9 exec/s: 44 rss: 77Mb L: 9/9 MS: 1 ChangeByte-
00:15:41.504 [2024-11-05 16:39:45.851686] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:41.504 [2024-11-05 16:39:45.851734] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:41.504 #50 NEW cov: 11194 ft: 17913 corp: 10/82b lim: 9 exec/s: 50 rss: 77Mb L: 9/9 MS: 1 CopyPart-
00:15:41.504 [2024-11-05 16:39:46.024701] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument
00:15:41.504 [2024-11-05 16:39:46.024752] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure
00:15:41.762 #51 NEW cov: 11194 ft: 18303 corp: 11/91b lim: 9 exec/s: 25 rss: 77Mb L: 9/9 MS: 1 CopyPart-
00:15:41.762 #51 DONE cov: 11194 ft: 18303 corp: 11/91b lim: 9 exec/s: 25 rss: 77Mb
00:15:41.762 ###### Recommended dictionary. ######
00:15:41.762 "C\221\277\246Q\237:\000" # Uses: 0
00:15:41.762 ###### End of recommended dictionary. ######
00:15:41.762 Done 51 runs in 2 second(s)
00:15:41.762 [2024-11-05 16:39:46.150961] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller
00:15:42.021 16:39:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz
00:15:42.021 16:39:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ ))
00:15:42.021 16:39:46 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num ))
00:15:42.021 16:39:46 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:15:42.021
00:15:42.021 real 0m20.497s
00:15:42.021 user 0m27.482s
00:15:42.021 sys 0m2.432s
00:15:42.021 16:39:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:42.021 16:39:46 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:42.021 ************************************
00:15:42.021 END TEST vfio_llvm_fuzz
00:15:42.021 ************************************
00:15:42.021
00:15:42.021 real 1m27.833s
00:15:42.021 user 2m7.221s
00:15:42.021 sys 0m11.910s
00:15:42.021 16:39:46 llvm_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:42.021 16:39:46 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x
00:15:42.021 ************************************
00:15:42.021 END TEST llvm_fuzz
00:15:42.021 ************************************
00:15:42.021 16:39:46 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:15:42.021 16:39:46 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:15:42.021 16:39:46 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:15:42.021 16:39:46 -- common/autotest_common.sh@724 -- # xtrace_disable
00:15:42.021 16:39:46 -- common/autotest_common.sh@10 -- # set +x
00:15:42.021 16:39:46 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:15:42.021 16:39:46 -- common/autotest_common.sh@1394 -- # local autotest_es=0
00:15:42.021 16:39:46 -- common/autotest_common.sh@1395 -- # xtrace_disable
00:15:42.021 16:39:46 -- common/autotest_common.sh@10 -- # set +x
00:15:47.292 INFO: APP EXITING
00:15:47.292 INFO: killing all VMs
00:15:47.292 INFO: killing vhost app
00:15:47.292 WARN: no vhost pid file found
00:15:47.292 INFO: EXIT DONE
00:15:50.577 Waiting for block devices as requested
00:15:50.577 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme
00:15:50.577 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:15:50.577 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:15:50.577 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:15:50.577 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:15:50.577 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:15:50.835 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:15:50.835 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:15:50.835 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:15:51.093 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:15:51.093 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:15:51.093 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:15:51.352 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:15:51.352 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:15:51.352 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:15:51.352 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:15:51.610 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:15:58.172 Cleaning
00:15:58.172 Removing: /dev/shm/spdk_tgt_trace.pid3509345
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3506861
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3507988
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3509345
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3509795
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3510576
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3510629
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3511509
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3511550
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3511896
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3512130
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3512369
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3512623
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3512866
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3513066
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3513257
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3513534
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3514068
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3516747
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3517123
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3517332
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3517341
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3517728
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3517856
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3518289
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3518297
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3518625
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3518676
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3518878
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3518890
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3519344
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3519546
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3519738
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3519978
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3520569
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3520922
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3521285
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3521638
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3522000
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3522359
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3522712
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3523072
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3523425
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3523790
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3524119
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3524441
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3524754
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3525073
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3525418
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3525774
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3526219
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3526614
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3527406
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3527807
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3528178
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3528553
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3528939
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3529309
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3529679
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3530138
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3530501
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3530864
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3531218
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3531663
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3532104
00:15:58.172 Removing: /var/run/dpdk/spdk_pid3532464
00:15:58.172 Clean
00:15:58.172 16:40:02 -- common/autotest_common.sh@1451 -- # return 0
00:15:58.172 16:40:02 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:15:58.172 16:40:02 -- common/autotest_common.sh@730 -- # xtrace_disable
00:15:58.172 16:40:02 -- common/autotest_common.sh@10 -- # set +x
00:15:58.172 16:40:02 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:15:58.172 16:40:02 -- common/autotest_common.sh@730 -- # xtrace_disable
00:15:58.172 16:40:02 -- common/autotest_common.sh@10 -- # set +x
00:15:58.172 16:40:02 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:15:58.172 16:40:02 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:15:58.172 16:40:02 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:15:58.172 16:40:02 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:15:58.172 16:40:02 -- spdk/autotest.sh@394 -- # hostname
00:15:58.172 16:40:02 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-39 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info
00:15:58.172 geninfo: WARNING: invalid characters removed from testname!
00:16:03.442 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda
00:16:04.009 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda
00:16:08.200 16:40:11 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:16:20.540 16:40:23 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:16:28.654 16:40:32 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:16:36.767 16:40:40 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:16:44.882 16:40:49 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:16:52.995 16:40:57 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:17:02.968 16:41:05 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:17:02.968 16:41:05 -- spdk/autorun.sh@1 -- $ timing_finish
00:17:02.968 16:41:05 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt ]]
00:17:02.968 16:41:05 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:17:02.968 16:41:05 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:17:02.968 16:41:05 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:17:02.968 + [[ -n 3393519 ]]
00:17:02.968 + sudo kill 3393519
00:17:02.979 [Pipeline] }
00:17:02.994 [Pipeline] // stage
00:17:02.999 [Pipeline] }
00:17:03.010 [Pipeline] // timeout
00:17:03.015 [Pipeline] }
00:17:03.029 [Pipeline] // catchError
00:17:03.034 [Pipeline] }
00:17:03.050 [Pipeline] // wrap
00:17:03.056 [Pipeline] }
00:17:03.069 [Pipeline] // catchError
00:17:03.079 [Pipeline] stage
00:17:03.082 [Pipeline] { (Epilogue)
00:17:03.096 [Pipeline] catchError
00:17:03.097 [Pipeline] {
00:17:03.111 [Pipeline] echo
00:17:03.113 Cleanup processes
00:17:03.119 [Pipeline] sh
00:17:03.404 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:17:03.404 3539808 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:17:03.418 [Pipeline] sh
00:17:03.702 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:17:03.702 ++ grep -v 'sudo pgrep'
00:17:03.702 ++ awk '{print $1}'
00:17:03.702 + sudo kill -9
00:17:03.713 [Pipeline] sh
00:17:03.994 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:17:22.090 [Pipeline] sh
00:17:22.371 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:17:22.371 Artifacts sizes are good
00:17:22.384 [Pipeline] archiveArtifacts
00:17:22.391 Archiving artifacts
00:17:22.514 [Pipeline] sh
00:17:22.796 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest
00:17:22.812 [Pipeline] cleanWs
00:17:22.822 [WS-CLEANUP] Deleting project workspace...
00:17:22.822 [WS-CLEANUP] Deferred wipeout is used...
00:17:22.829 [WS-CLEANUP] done
00:17:22.831 [Pipeline] }
00:17:22.847 [Pipeline] // catchError
00:17:22.859 [Pipeline] sh
00:17:23.144 + logger -p user.info -t JENKINS-CI
00:17:23.153 [Pipeline] }
00:17:23.167 [Pipeline] // stage
00:17:23.173 [Pipeline] }
00:17:23.187 [Pipeline] // node
00:17:23.192 [Pipeline] End of Pipeline
00:17:23.230 Finished: SUCCESS