00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2379 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3640 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.018 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.019 The recommended git tool is: git 00:00:00.019 using credential 00000000-0000-0000-0000-000000000002 00:00:00.022 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.035 Fetching changes from the remote Git repository 00:00:00.038 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.055 Using shallow fetch with depth 1 00:00:00.055 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.055 > git --version # timeout=10 00:00:00.074 > git --version # 'git version 2.39.2' 00:00:00.074 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.107 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.107 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.968 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.977 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.988 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:02.988 > git config core.sparsecheckout # timeout=10 00:00:02.999 > git read-tree -mu HEAD # timeout=10 00:00:03.014 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:03.031 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:03.031 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:03.094 [Pipeline] Start of Pipeline 00:00:03.106 [Pipeline] library 00:00:03.107 Loading library shm_lib@master 00:00:03.108 Library shm_lib@master is cached. Copying from home. 00:00:03.127 [Pipeline] node 00:00:03.142 Running on WFP20 in /var/jenkins/workspace/short-fuzz-phy-autotest 00:00:03.144 [Pipeline] { 00:00:03.155 [Pipeline] catchError 00:00:03.157 [Pipeline] { 00:00:03.170 [Pipeline] wrap 00:00:03.178 [Pipeline] { 00:00:03.186 [Pipeline] stage 00:00:03.188 [Pipeline] { (Prologue) 00:00:03.386 [Pipeline] sh 00:00:03.672 + logger -p user.info -t JENKINS-CI 00:00:03.689 [Pipeline] echo 00:00:03.690 Node: WFP20 00:00:03.697 [Pipeline] sh 00:00:03.998 [Pipeline] setCustomBuildProperty 00:00:04.010 [Pipeline] echo 00:00:04.012 Cleanup processes 00:00:04.017 [Pipeline] sh 00:00:04.301 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.301 1177834 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.314 [Pipeline] sh 00:00:04.599 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:00:04.599 ++ grep -v 'sudo pgrep' 00:00:04.599 ++ awk '{print $1}' 00:00:04.599 + sudo kill -9 00:00:04.599 + true 00:00:04.613 [Pipeline] cleanWs 00:00:04.624 [WS-CLEANUP] Deleting project workspace... 00:00:04.624 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.630 [WS-CLEANUP] done 00:00:04.634 [Pipeline] setCustomBuildProperty 00:00:04.647 [Pipeline] sh 00:00:04.925 + sudo git config --global --replace-all safe.directory '*' 00:00:05.015 [Pipeline] httpRequest 00:00:05.919 [Pipeline] echo 00:00:05.920 Sorcerer 10.211.164.20 is alive 00:00:05.931 [Pipeline] retry 00:00:05.933 [Pipeline] { 00:00:05.945 [Pipeline] httpRequest 00:00:05.948 HttpMethod: GET 00:00:05.949 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.949 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.952 Response Code: HTTP/1.1 200 OK 00:00:05.953 Success: Status code 200 is in the accepted range: 200,404 00:00:05.953 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.570 [Pipeline] } 00:00:06.583 [Pipeline] // retry 00:00:06.589 [Pipeline] sh 00:00:06.870 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.885 [Pipeline] httpRequest 00:00:08.069 [Pipeline] echo 00:00:08.071 Sorcerer 10.211.164.20 is alive 00:00:08.077 [Pipeline] retry 00:00:08.078 [Pipeline] { 00:00:08.090 [Pipeline] httpRequest 00:00:08.094 HttpMethod: GET 00:00:08.094 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:08.095 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:08.110 Response Code: HTTP/1.1 200 OK 00:00:08.110 Success: Status code 200 is in the accepted range: 200,404 00:00:08.111 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:02.682 [Pipeline] } 00:01:02.702 [Pipeline] // retry 00:01:02.710 [Pipeline] sh 00:01:02.998 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:05.553 [Pipeline] sh 00:01:05.841 + git -C spdk log --oneline -n5 00:01:05.841 c13c99a5e test: Various fixes for Fedora40 00:01:05.841 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:05.841 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:05.841 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:05.841 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:05.852 [Pipeline] } 00:01:05.865 [Pipeline] // stage 00:01:05.872 [Pipeline] stage 00:01:05.874 [Pipeline] { (Prepare) 00:01:05.889 [Pipeline] writeFile 00:01:05.902 [Pipeline] sh 00:01:06.188 + logger -p user.info -t JENKINS-CI 00:01:06.203 [Pipeline] sh 00:01:06.494 + logger -p user.info -t JENKINS-CI 00:01:06.509 [Pipeline] sh 00:01:06.800 + cat autorun-spdk.conf 00:01:06.800 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.800 SPDK_TEST_FUZZER_SHORT=1 00:01:06.800 SPDK_TEST_FUZZER=1 00:01:06.800 SPDK_RUN_UBSAN=1 00:01:06.808 RUN_NIGHTLY=1 00:01:06.813 [Pipeline] readFile 00:01:06.838 [Pipeline] withEnv 00:01:06.840 [Pipeline] { 00:01:06.853 [Pipeline] sh 00:01:07.141 + set -ex 00:01:07.141 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:01:07.141 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:07.141 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:07.141 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:07.141 ++ SPDK_TEST_FUZZER=1 00:01:07.141 ++ SPDK_RUN_UBSAN=1 00:01:07.141 ++ RUN_NIGHTLY=1 00:01:07.141 + case $SPDK_TEST_NVMF_NICS in 00:01:07.141 + DRIVERS= 00:01:07.141 + [[ -n '' ]] 00:01:07.141 + exit 0 00:01:07.151 
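The prologue step above sources the per-job configuration (autorun-spdk.conf) and exits early because this short-fuzz job requests no NVMf NICs. A minimal sketch of that pattern, assuming hypothetical NIC-to-driver mappings (only the file path and the variable names SPDK_TEST_NVMF_NICS and DRIVERS appear in the log):

set -e

CONF=/var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf
if [[ -f "$CONF" ]]; then
    source "$CONF"   # pulls in SPDK_TEST_FUZZER=1, SPDK_RUN_UBSAN=1, etc.
fi

# Map the requested NIC family to kernel drivers. The two mappings
# below are illustrative assumptions, not taken from this log.
case "${SPDK_TEST_NVMF_NICS:-}" in
    mlx5*) DRIVERS="mlx5_ib" ;;
    e810)  DRIVERS="ice" ;;
    *)     DRIVERS="" ;;
esac

# This job sets no NICs, so DRIVERS stays empty and the step ends
# here, matching the "+ exit 0" in the trace above.
if [[ -z "$DRIVERS" ]]; then
    exit 0
fi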
[Pipeline] } 00:01:07.166 [Pipeline] // withEnv 00:01:07.172 [Pipeline] } 00:01:07.188 [Pipeline] // stage 00:01:07.199 [Pipeline] catchError 00:01:07.200 [Pipeline] { 00:01:07.215 [Pipeline] timeout 00:01:07.215 Timeout set to expire in 30 min 00:01:07.217 [Pipeline] { 00:01:07.232 [Pipeline] stage 00:01:07.234 [Pipeline] { (Tests) 00:01:07.248 [Pipeline] sh 00:01:07.536 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:07.537 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:07.537 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:01:07.537 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:01:07.537 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:07.537 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:07.537 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:01:07.537 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:07.537 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:01:07.537 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:01:07.537 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:01:07.537 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:01:07.537 + source /etc/os-release 00:01:07.537 ++ NAME='Fedora Linux' 00:01:07.537 ++ VERSION='39 (Cloud Edition)' 00:01:07.537 ++ ID=fedora 00:01:07.537 ++ VERSION_ID=39 00:01:07.537 ++ VERSION_CODENAME= 00:01:07.537 ++ PLATFORM_ID=platform:f39 00:01:07.537 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:07.537 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:07.537 ++ LOGO=fedora-logo-icon 00:01:07.537 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:07.537 ++ HOME_URL=https://fedoraproject.org/ 00:01:07.537 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:07.537 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:07.537 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:07.537 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:07.537 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:07.537 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:07.537 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:07.537 ++ SUPPORT_END=2024-11-12 00:01:07.537 ++ VARIANT='Cloud Edition' 00:01:07.537 ++ VARIANT_ID=cloud 00:01:07.537 + uname -a 00:01:07.537 Linux spdk-wfp-20 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:07.537 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:01:10.080 Hugepages 00:01:10.080 node hugesize free / total 00:01:10.080 node0 1048576kB 0 / 0 00:01:10.080 node0 2048kB 0 / 0 00:01:10.080 node1 1048576kB 0 / 0 00:01:10.080 node1 2048kB 0 / 0 00:01:10.080 00:01:10.080 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:10.080 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:10.080 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:10.080 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:10.080 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:10.080 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:10.080 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:10.080 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:10.080 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:10.080 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:10.080 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:10.080 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:10.080 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:10.080 I/OAT 0000:80:04.4 8086 
2021 1 ioatdma - - 00:01:10.080 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:10.080 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:10.080 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:10.080 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:10.080 + rm -f /tmp/spdk-ld-path 00:01:10.080 + source autorun-spdk.conf 00:01:10.080 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.080 ++ SPDK_TEST_FUZZER_SHORT=1 00:01:10.080 ++ SPDK_TEST_FUZZER=1 00:01:10.080 ++ SPDK_RUN_UBSAN=1 00:01:10.080 ++ RUN_NIGHTLY=1 00:01:10.080 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:10.080 + [[ -n '' ]] 00:01:10.080 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:10.080 + for M in /var/spdk/build-*-manifest.txt 00:01:10.080 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:10.080 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:10.080 + for M in /var/spdk/build-*-manifest.txt 00:01:10.080 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:10.080 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:10.080 + for M in /var/spdk/build-*-manifest.txt 00:01:10.080 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:10.080 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:01:10.080 ++ uname 00:01:10.080 + [[ Linux == \L\i\n\u\x ]] 00:01:10.080 + sudo dmesg -T 00:01:10.080 + sudo dmesg --clear 00:01:10.080 + dmesg_pid=1179229 00:01:10.080 + [[ Fedora Linux == FreeBSD ]] 00:01:10.080 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:10.080 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:10.080 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:10.080 + [[ -x /usr/src/fio-static/fio ]] 00:01:10.080 + export FIO_BIN=/usr/src/fio-static/fio 00:01:10.080 + FIO_BIN=/usr/src/fio-static/fio 00:01:10.080 + sudo dmesg -Tw 00:01:10.080 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:10.080 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:10.080 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:10.080 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:10.080 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:10.080 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:10.080 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:10.080 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:10.080 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:01:10.080 Test configuration: 00:01:10.080 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.080 SPDK_TEST_FUZZER_SHORT=1 00:01:10.080 SPDK_TEST_FUZZER=1 00:01:10.080 SPDK_RUN_UBSAN=1 00:01:10.080 RUN_NIGHTLY=1 23:01:06 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:10.080 23:01:06 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:01:10.341 23:01:06 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:10.341 23:01:06 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:10.341 23:01:06 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:10.341 23:01:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:10.341 23:01:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:10.341 23:01:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:10.341 23:01:06 -- paths/export.sh@5 -- $ export PATH 00:01:10.341 23:01:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:10.341 23:01:06 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:01:10.341 23:01:06 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:10.341 23:01:06 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731880866.XXXXXX 00:01:10.341 23:01:06 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731880866.mgGvec 00:01:10.341 23:01:06 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:10.341 23:01:06 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:01:10.341 23:01:06 
-- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:01:10.341 23:01:06 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:10.341 23:01:06 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:10.341 23:01:06 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:10.341 23:01:06 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:10.341 23:01:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:10.341 23:01:06 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:10.341 23:01:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:10.341 23:01:06 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:10.341 23:01:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:01:10.341 23:01:06 -- spdk/autobuild.sh@16 -- $ date -u 00:01:10.341 Sun Nov 17 10:01:06 PM UTC 2024 00:01:10.341 23:01:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:10.341 LTS-67-gc13c99a5e 00:01:10.341 23:01:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:10.341 23:01:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:10.341 23:01:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:10.341 23:01:06 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:10.341 23:01:06 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:10.341 23:01:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:10.341 ************************************ 00:01:10.341 START TEST ubsan 00:01:10.341 ************************************ 00:01:10.341 23:01:06 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:10.341 using ubsan 00:01:10.341 00:01:10.341 real 0m0.000s 00:01:10.341 user 0m0.000s 00:01:10.341 sys 0m0.000s 00:01:10.342 23:01:06 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:10.342 23:01:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:10.342 ************************************ 00:01:10.342 END TEST ubsan 00:01:10.342 ************************************ 00:01:10.342 23:01:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:10.342 23:01:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:10.342 23:01:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:10.342 23:01:06 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:01:10.342 23:01:06 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:01:10.342 23:01:06 -- common/autobuild_common.sh@428 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:01:10.342 23:01:06 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:01:10.342 23:01:06 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:10.342 23:01:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:10.342 ************************************ 00:01:10.342 START TEST autobuild_llvm_precompile 00:01:10.342 ************************************ 00:01:10.342 23:01:06 -- common/autotest_common.sh@1114 -- $ _llvm_precompile 00:01:10.342 23:01:06 -- common/autobuild_common.sh@32 -- $ clang 
--version 00:01:10.342 23:01:06 -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:01:10.342 Target: x86_64-redhat-linux-gnu 00:01:10.342 Thread model: posix 00:01:10.342 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:01:10.342 23:01:06 -- common/autobuild_common.sh@33 -- $ clang_num=17 00:01:10.342 23:01:06 -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:01:10.342 23:01:06 -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:01:10.342 23:01:06 -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:01:10.342 23:01:06 -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:01:10.342 23:01:06 -- common/autobuild_common.sh@38 -- $ fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:01:10.342 23:01:06 -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:10.342 23:01:06 -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:01:10.342 23:01:06 -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:01:10.342 23:01:06 -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:10.602 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:10.602 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:10.862 Using 'verbs' RDMA provider 00:01:26.731 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:38.958 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:38.958 Creating mk/config.mk...done. 00:01:38.958 Creating mk/cc.flags.mk...done. 00:01:38.958 Type 'make' to build. 
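The configure run above is driven by the llvm_precompile helper, which globs for clang's libFuzzer runtime and passes it to SPDK's configure via --with-fuzzer. A reduced sketch of that lookup under the same clang 17 toolchain shown in the log (the flag list is abridged, and extglob is enabled explicitly here, which the real helper may arrange differently):

shopt -s extglob nullglob

clang_num=17   # parsed from `clang --version` earlier in the log

# Same glob shape as fuzzer_libs in the trace above, reduced to the
# major-version directory; the ?(-x86_64) pattern requires extglob.
fuzzer_libs=(/usr/lib*/clang/"$clang_num"/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a)
fuzzer_lib=${fuzzer_libs[0]:-}
[[ -e "$fuzzer_lib" ]] || { echo "libFuzzer runtime not found" >&2; exit 1; }

export CC=clang-17 CXX=clang++-17
./configure --enable-debug --enable-werror --enable-ubsan \
    --with-vfio-user \
    --with-fuzzer="$fuzzer_lib"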
00:01:38.958 00:01:38.958 real 0m27.990s 00:01:38.958 user 0m12.309s 00:01:38.958 sys 0m14.928s 00:01:38.958 23:01:34 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:38.958 23:01:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.958 ************************************ 00:01:38.958 END TEST autobuild_llvm_precompile 00:01:38.958 ************************************ 00:01:38.958 23:01:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:38.958 23:01:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:38.958 23:01:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:38.958 23:01:34 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:01:38.958 23:01:34 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:01:38.958 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:01:38.958 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:01:38.958 Using 'verbs' RDMA provider 00:01:51.753 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:02:03.970 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:03.970 Creating mk/config.mk...done. 00:02:03.970 Creating mk/cc.flags.mk...done. 00:02:03.970 Type 'make' to build. 00:02:03.970 23:01:58 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:02:03.970 23:01:58 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:03.970 23:01:58 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:03.970 23:01:58 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.970 ************************************ 00:02:03.970 START TEST make 00:02:03.970 ************************************ 00:02:03.970 23:01:58 -- common/autotest_common.sh@1114 -- $ make -j112 00:02:03.970 make[1]: Nothing to be done for 'all'. 
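Each test phase in this log is wrapped by run_test, which prints the asterisk banner pairs and the real/user/sys timing around "START TEST"/"END TEST". The real helper lives in SPDK's autotest_common.sh; the version below is a behavioral sketch that only reproduces what is observable in this output, not the actual implementation:

run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"        # emits the real/user/sys lines seen in this log
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

# Invocation matching the build step above:
run_test make make -j112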
00:02:04.229 The Meson build system 00:02:04.229 Version: 1.5.0 00:02:04.229 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:02:04.229 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:04.229 Build type: native build 00:02:04.229 Project name: libvfio-user 00:02:04.229 Project version: 0.0.1 00:02:04.229 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:04.229 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:04.229 Host machine cpu family: x86_64 00:02:04.229 Host machine cpu: x86_64 00:02:04.229 Run-time dependency threads found: YES 00:02:04.229 Library dl found: YES 00:02:04.229 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:04.229 Run-time dependency json-c found: YES 0.17 00:02:04.229 Run-time dependency cmocka found: YES 1.1.7 00:02:04.229 Program pytest-3 found: NO 00:02:04.229 Program flake8 found: NO 00:02:04.229 Program misspell-fixer found: NO 00:02:04.229 Program restructuredtext-lint found: NO 00:02:04.229 Program valgrind found: YES (/usr/bin/valgrind) 00:02:04.229 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:04.229 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:04.229 Compiler for C supports arguments -Wwrite-strings: YES 00:02:04.229 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:04.229 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:04.229 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:04.229 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:04.229 Build targets in project: 8 00:02:04.229 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:04.229 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:04.229 00:02:04.229 libvfio-user 0.0.1 00:02:04.229 00:02:04.229 User defined options 00:02:04.229 buildtype : debug 00:02:04.229 default_library: static 00:02:04.229 libdir : /usr/local/lib 00:02:04.229 00:02:04.229 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:04.798 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:04.798 [1/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:04.798 [2/36] Compiling C object samples/lspci.p/lspci.c.o 00:02:04.798 [3/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:02:04.798 [4/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:04.798 [5/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:02:04.798 [6/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:02:04.798 [7/36] Compiling C object samples/null.p/null.c.o 00:02:04.798 [8/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:02:04.798 [9/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:04.799 [10/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:04.799 [11/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:04.799 [12/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:02:04.799 [13/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:04.799 [14/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:04.799 [15/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:04.799 [16/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:04.799 [17/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:02:04.799 [18/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:02:04.799 [19/36] Compiling C object test/unit_tests.p/mocks.c.o 00:02:04.799 [20/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:04.799 [21/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:04.799 [22/36] Compiling C object samples/server.p/server.c.o 00:02:04.799 [23/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:04.799 [24/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:04.799 [25/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:04.799 [26/36] Compiling C object samples/client.p/client.c.o 00:02:04.799 [27/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:04.799 [28/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:02:04.799 [29/36] Linking target samples/client 00:02:04.799 [30/36] Linking static target lib/libvfio-user.a 00:02:04.799 [31/36] Linking target test/unit_tests 00:02:05.057 [32/36] Linking target samples/server 00:02:05.057 [33/36] Linking target samples/null 00:02:05.057 [34/36] Linking target samples/lspci 00:02:05.057 [35/36] Linking target samples/gpio-pci-idio-16 00:02:05.057 [36/36] Linking target samples/shadow_ioeventfd_server 00:02:05.057 INFO: autodetecting backend as ninja 00:02:05.057 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:05.057 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:05.315 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:05.315 ninja: no work to do. 00:02:10.592 The Meson build system 00:02:10.592 Version: 1.5.0 00:02:10.592 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:02:10.592 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:02:10.592 Build type: native build 00:02:10.592 Program cat found: YES (/usr/bin/cat) 00:02:10.592 Project name: DPDK 00:02:10.592 Project version: 23.11.0 00:02:10.592 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:02:10.592 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:02:10.592 Host machine cpu family: x86_64 00:02:10.592 Host machine cpu: x86_64 00:02:10.592 Message: ## Building in Developer Mode ## 00:02:10.592 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:10.592 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:10.592 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:10.592 Program python3 found: YES (/usr/bin/python3) 00:02:10.592 Program cat found: YES (/usr/bin/cat) 00:02:10.592 Compiler for C supports arguments -march=native: YES 00:02:10.592 Checking for size of "void *" : 8 00:02:10.592 Checking for size of "void *" : 8 (cached) 00:02:10.592 Library m found: YES 00:02:10.592 Library numa found: YES 00:02:10.592 Has header "numaif.h" : YES 00:02:10.592 Library fdt found: NO 00:02:10.592 Library execinfo found: NO 00:02:10.592 Has header "execinfo.h" : YES 00:02:10.592 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:10.592 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:10.592 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:10.592 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:10.592 Run-time dependency openssl found: YES 3.1.1 00:02:10.592 Run-time dependency libpcap found: YES 1.10.4 00:02:10.592 Has header "pcap.h" with dependency libpcap: YES 00:02:10.592 Compiler for C supports arguments -Wcast-qual: YES 00:02:10.592 Compiler for C supports arguments -Wdeprecated: YES 00:02:10.592 Compiler for C supports arguments -Wformat: YES 00:02:10.592 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:10.592 Compiler for C supports arguments -Wformat-security: YES 00:02:10.592 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:10.592 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:10.592 Compiler for C supports arguments -Wnested-externs: YES 00:02:10.592 Compiler for C supports arguments -Wold-style-definition: YES 00:02:10.592 Compiler for C supports arguments -Wpointer-arith: YES 00:02:10.592 Compiler for C supports arguments -Wsign-compare: YES 00:02:10.592 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:10.592 Compiler for C supports arguments -Wundef: YES 00:02:10.592 Compiler for C supports arguments -Wwrite-strings: YES 00:02:10.592 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:10.592 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:10.592 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:10.592 Program objdump found: YES (/usr/bin/objdump) 00:02:10.592 
Compiler for C supports arguments -mavx512f: YES 00:02:10.592 Checking if "AVX512 checking" compiles: YES 00:02:10.592 Fetching value of define "__SSE4_2__" : 1 00:02:10.592 Fetching value of define "__AES__" : 1 00:02:10.592 Fetching value of define "__AVX__" : 1 00:02:10.592 Fetching value of define "__AVX2__" : 1 00:02:10.592 Fetching value of define "__AVX512BW__" : 1 00:02:10.592 Fetching value of define "__AVX512CD__" : 1 00:02:10.592 Fetching value of define "__AVX512DQ__" : 1 00:02:10.592 Fetching value of define "__AVX512F__" : 1 00:02:10.592 Fetching value of define "__AVX512VL__" : 1 00:02:10.592 Fetching value of define "__PCLMUL__" : 1 00:02:10.592 Fetching value of define "__RDRND__" : 1 00:02:10.592 Fetching value of define "__RDSEED__" : 1 00:02:10.592 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:10.592 Fetching value of define "__znver1__" : (undefined) 00:02:10.592 Fetching value of define "__znver2__" : (undefined) 00:02:10.592 Fetching value of define "__znver3__" : (undefined) 00:02:10.592 Fetching value of define "__znver4__" : (undefined) 00:02:10.592 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:10.592 Message: lib/log: Defining dependency "log" 00:02:10.592 Message: lib/kvargs: Defining dependency "kvargs" 00:02:10.592 Message: lib/telemetry: Defining dependency "telemetry" 00:02:10.592 Checking for function "getentropy" : NO 00:02:10.592 Message: lib/eal: Defining dependency "eal" 00:02:10.592 Message: lib/ring: Defining dependency "ring" 00:02:10.592 Message: lib/rcu: Defining dependency "rcu" 00:02:10.592 Message: lib/mempool: Defining dependency "mempool" 00:02:10.592 Message: lib/mbuf: Defining dependency "mbuf" 00:02:10.592 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:10.592 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:10.592 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:10.592 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:10.592 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:10.592 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:10.592 Compiler for C supports arguments -mpclmul: YES 00:02:10.592 Compiler for C supports arguments -maes: YES 00:02:10.592 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:10.592 Compiler for C supports arguments -mavx512bw: YES 00:02:10.592 Compiler for C supports arguments -mavx512dq: YES 00:02:10.592 Compiler for C supports arguments -mavx512vl: YES 00:02:10.592 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:10.592 Compiler for C supports arguments -mavx2: YES 00:02:10.592 Compiler for C supports arguments -mavx: YES 00:02:10.592 Message: lib/net: Defining dependency "net" 00:02:10.592 Message: lib/meter: Defining dependency "meter" 00:02:10.592 Message: lib/ethdev: Defining dependency "ethdev" 00:02:10.592 Message: lib/pci: Defining dependency "pci" 00:02:10.592 Message: lib/cmdline: Defining dependency "cmdline" 00:02:10.592 Message: lib/hash: Defining dependency "hash" 00:02:10.592 Message: lib/timer: Defining dependency "timer" 00:02:10.592 Message: lib/compressdev: Defining dependency "compressdev" 00:02:10.592 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:10.592 Message: lib/dmadev: Defining dependency "dmadev" 00:02:10.592 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:10.592 Message: lib/power: Defining dependency "power" 00:02:10.592 Message: lib/reorder: Defining dependency "reorder" 00:02:10.592 Message: lib/security: Defining dependency 
"security" 00:02:10.592 Has header "linux/userfaultfd.h" : YES 00:02:10.592 Has header "linux/vduse.h" : YES 00:02:10.592 Message: lib/vhost: Defining dependency "vhost" 00:02:10.592 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:10.592 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:10.592 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:10.592 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:10.592 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:10.592 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:10.592 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:10.592 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:10.592 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:10.592 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:10.592 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:10.592 Configuring doxy-api-html.conf using configuration 00:02:10.592 Configuring doxy-api-man.conf using configuration 00:02:10.592 Program mandb found: YES (/usr/bin/mandb) 00:02:10.592 Program sphinx-build found: NO 00:02:10.592 Configuring rte_build_config.h using configuration 00:02:10.592 Message: 00:02:10.592 ================= 00:02:10.592 Applications Enabled 00:02:10.592 ================= 00:02:10.592 00:02:10.592 apps: 00:02:10.592 00:02:10.592 00:02:10.592 Message: 00:02:10.592 ================= 00:02:10.592 Libraries Enabled 00:02:10.592 ================= 00:02:10.592 00:02:10.592 libs: 00:02:10.592 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:10.592 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:10.592 cryptodev, dmadev, power, reorder, security, vhost, 00:02:10.592 00:02:10.592 Message: 00:02:10.592 =============== 00:02:10.592 Drivers Enabled 00:02:10.592 =============== 00:02:10.592 00:02:10.592 common: 00:02:10.592 00:02:10.592 bus: 00:02:10.592 pci, vdev, 00:02:10.592 mempool: 00:02:10.592 ring, 00:02:10.592 dma: 00:02:10.592 00:02:10.592 net: 00:02:10.592 00:02:10.592 crypto: 00:02:10.592 00:02:10.592 compress: 00:02:10.592 00:02:10.592 vdpa: 00:02:10.592 00:02:10.592 00:02:10.592 Message: 00:02:10.592 ================= 00:02:10.592 Content Skipped 00:02:10.592 ================= 00:02:10.592 00:02:10.592 apps: 00:02:10.592 dumpcap: explicitly disabled via build config 00:02:10.592 graph: explicitly disabled via build config 00:02:10.593 pdump: explicitly disabled via build config 00:02:10.593 proc-info: explicitly disabled via build config 00:02:10.593 test-acl: explicitly disabled via build config 00:02:10.593 test-bbdev: explicitly disabled via build config 00:02:10.593 test-cmdline: explicitly disabled via build config 00:02:10.593 test-compress-perf: explicitly disabled via build config 00:02:10.593 test-crypto-perf: explicitly disabled via build config 00:02:10.593 test-dma-perf: explicitly disabled via build config 00:02:10.593 test-eventdev: explicitly disabled via build config 00:02:10.593 test-fib: explicitly disabled via build config 00:02:10.593 test-flow-perf: explicitly disabled via build config 00:02:10.593 test-gpudev: explicitly disabled via build config 00:02:10.593 test-mldev: explicitly disabled via build config 00:02:10.593 test-pipeline: explicitly disabled via build config 00:02:10.593 test-pmd: explicitly disabled via build config 00:02:10.593 test-regex: explicitly disabled 
via build config 00:02:10.593 test-sad: explicitly disabled via build config 00:02:10.593 test-security-perf: explicitly disabled via build config 00:02:10.593 00:02:10.593 libs: 00:02:10.593 metrics: explicitly disabled via build config 00:02:10.593 acl: explicitly disabled via build config 00:02:10.593 bbdev: explicitly disabled via build config 00:02:10.593 bitratestats: explicitly disabled via build config 00:02:10.593 bpf: explicitly disabled via build config 00:02:10.593 cfgfile: explicitly disabled via build config 00:02:10.593 distributor: explicitly disabled via build config 00:02:10.593 efd: explicitly disabled via build config 00:02:10.593 eventdev: explicitly disabled via build config 00:02:10.593 dispatcher: explicitly disabled via build config 00:02:10.593 gpudev: explicitly disabled via build config 00:02:10.593 gro: explicitly disabled via build config 00:02:10.593 gso: explicitly disabled via build config 00:02:10.593 ip_frag: explicitly disabled via build config 00:02:10.593 jobstats: explicitly disabled via build config 00:02:10.593 latencystats: explicitly disabled via build config 00:02:10.593 lpm: explicitly disabled via build config 00:02:10.593 member: explicitly disabled via build config 00:02:10.593 pcapng: explicitly disabled via build config 00:02:10.593 rawdev: explicitly disabled via build config 00:02:10.593 regexdev: explicitly disabled via build config 00:02:10.593 mldev: explicitly disabled via build config 00:02:10.593 rib: explicitly disabled via build config 00:02:10.593 sched: explicitly disabled via build config 00:02:10.593 stack: explicitly disabled via build config 00:02:10.593 ipsec: explicitly disabled via build config 00:02:10.593 pdcp: explicitly disabled via build config 00:02:10.593 fib: explicitly disabled via build config 00:02:10.593 port: explicitly disabled via build config 00:02:10.593 pdump: explicitly disabled via build config 00:02:10.593 table: explicitly disabled via build config 00:02:10.593 pipeline: explicitly disabled via build config 00:02:10.593 graph: explicitly disabled via build config 00:02:10.593 node: explicitly disabled via build config 00:02:10.593 00:02:10.593 drivers: 00:02:10.593 common/cpt: not in enabled drivers build config 00:02:10.593 common/dpaax: not in enabled drivers build config 00:02:10.593 common/iavf: not in enabled drivers build config 00:02:10.593 common/idpf: not in enabled drivers build config 00:02:10.593 common/mvep: not in enabled drivers build config 00:02:10.593 common/octeontx: not in enabled drivers build config 00:02:10.593 bus/auxiliary: not in enabled drivers build config 00:02:10.593 bus/cdx: not in enabled drivers build config 00:02:10.593 bus/dpaa: not in enabled drivers build config 00:02:10.593 bus/fslmc: not in enabled drivers build config 00:02:10.593 bus/ifpga: not in enabled drivers build config 00:02:10.593 bus/platform: not in enabled drivers build config 00:02:10.593 bus/vmbus: not in enabled drivers build config 00:02:10.593 common/cnxk: not in enabled drivers build config 00:02:10.593 common/mlx5: not in enabled drivers build config 00:02:10.593 common/nfp: not in enabled drivers build config 00:02:10.593 common/qat: not in enabled drivers build config 00:02:10.593 common/sfc_efx: not in enabled drivers build config 00:02:10.593 mempool/bucket: not in enabled drivers build config 00:02:10.593 mempool/cnxk: not in enabled drivers build config 00:02:10.593 mempool/dpaa: not in enabled drivers build config 00:02:10.593 mempool/dpaa2: not in enabled drivers build config 
00:02:10.593 mempool/octeontx: not in enabled drivers build config 00:02:10.593 mempool/stack: not in enabled drivers build config 00:02:10.593 dma/cnxk: not in enabled drivers build config 00:02:10.593 dma/dpaa: not in enabled drivers build config 00:02:10.593 dma/dpaa2: not in enabled drivers build config 00:02:10.593 dma/hisilicon: not in enabled drivers build config 00:02:10.593 dma/idxd: not in enabled drivers build config 00:02:10.593 dma/ioat: not in enabled drivers build config 00:02:10.593 dma/skeleton: not in enabled drivers build config 00:02:10.593 net/af_packet: not in enabled drivers build config 00:02:10.593 net/af_xdp: not in enabled drivers build config 00:02:10.593 net/ark: not in enabled drivers build config 00:02:10.593 net/atlantic: not in enabled drivers build config 00:02:10.593 net/avp: not in enabled drivers build config 00:02:10.593 net/axgbe: not in enabled drivers build config 00:02:10.593 net/bnx2x: not in enabled drivers build config 00:02:10.593 net/bnxt: not in enabled drivers build config 00:02:10.593 net/bonding: not in enabled drivers build config 00:02:10.593 net/cnxk: not in enabled drivers build config 00:02:10.593 net/cpfl: not in enabled drivers build config 00:02:10.593 net/cxgbe: not in enabled drivers build config 00:02:10.593 net/dpaa: not in enabled drivers build config 00:02:10.593 net/dpaa2: not in enabled drivers build config 00:02:10.593 net/e1000: not in enabled drivers build config 00:02:10.593 net/ena: not in enabled drivers build config 00:02:10.593 net/enetc: not in enabled drivers build config 00:02:10.593 net/enetfec: not in enabled drivers build config 00:02:10.593 net/enic: not in enabled drivers build config 00:02:10.593 net/failsafe: not in enabled drivers build config 00:02:10.593 net/fm10k: not in enabled drivers build config 00:02:10.593 net/gve: not in enabled drivers build config 00:02:10.593 net/hinic: not in enabled drivers build config 00:02:10.593 net/hns3: not in enabled drivers build config 00:02:10.593 net/i40e: not in enabled drivers build config 00:02:10.593 net/iavf: not in enabled drivers build config 00:02:10.593 net/ice: not in enabled drivers build config 00:02:10.593 net/idpf: not in enabled drivers build config 00:02:10.593 net/igc: not in enabled drivers build config 00:02:10.593 net/ionic: not in enabled drivers build config 00:02:10.593 net/ipn3ke: not in enabled drivers build config 00:02:10.593 net/ixgbe: not in enabled drivers build config 00:02:10.593 net/mana: not in enabled drivers build config 00:02:10.593 net/memif: not in enabled drivers build config 00:02:10.593 net/mlx4: not in enabled drivers build config 00:02:10.593 net/mlx5: not in enabled drivers build config 00:02:10.593 net/mvneta: not in enabled drivers build config 00:02:10.593 net/mvpp2: not in enabled drivers build config 00:02:10.593 net/netvsc: not in enabled drivers build config 00:02:10.593 net/nfb: not in enabled drivers build config 00:02:10.593 net/nfp: not in enabled drivers build config 00:02:10.593 net/ngbe: not in enabled drivers build config 00:02:10.593 net/null: not in enabled drivers build config 00:02:10.593 net/octeontx: not in enabled drivers build config 00:02:10.593 net/octeon_ep: not in enabled drivers build config 00:02:10.593 net/pcap: not in enabled drivers build config 00:02:10.593 net/pfe: not in enabled drivers build config 00:02:10.593 net/qede: not in enabled drivers build config 00:02:10.593 net/ring: not in enabled drivers build config 00:02:10.593 net/sfc: not in enabled drivers build config 00:02:10.593 
net/softnic: not in enabled drivers build config 00:02:10.593 net/tap: not in enabled drivers build config 00:02:10.593 net/thunderx: not in enabled drivers build config 00:02:10.593 net/txgbe: not in enabled drivers build config 00:02:10.593 net/vdev_netvsc: not in enabled drivers build config 00:02:10.593 net/vhost: not in enabled drivers build config 00:02:10.593 net/virtio: not in enabled drivers build config 00:02:10.593 net/vmxnet3: not in enabled drivers build config 00:02:10.593 raw/*: missing internal dependency, "rawdev" 00:02:10.593 crypto/armv8: not in enabled drivers build config 00:02:10.593 crypto/bcmfs: not in enabled drivers build config 00:02:10.593 crypto/caam_jr: not in enabled drivers build config 00:02:10.593 crypto/ccp: not in enabled drivers build config 00:02:10.593 crypto/cnxk: not in enabled drivers build config 00:02:10.593 crypto/dpaa_sec: not in enabled drivers build config 00:02:10.593 crypto/dpaa2_sec: not in enabled drivers build config 00:02:10.593 crypto/ipsec_mb: not in enabled drivers build config 00:02:10.593 crypto/mlx5: not in enabled drivers build config 00:02:10.593 crypto/mvsam: not in enabled drivers build config 00:02:10.593 crypto/nitrox: not in enabled drivers build config 00:02:10.593 crypto/null: not in enabled drivers build config 00:02:10.593 crypto/octeontx: not in enabled drivers build config 00:02:10.593 crypto/openssl: not in enabled drivers build config 00:02:10.593 crypto/scheduler: not in enabled drivers build config 00:02:10.593 crypto/uadk: not in enabled drivers build config 00:02:10.593 crypto/virtio: not in enabled drivers build config 00:02:10.593 compress/isal: not in enabled drivers build config 00:02:10.593 compress/mlx5: not in enabled drivers build config 00:02:10.593 compress/octeontx: not in enabled drivers build config 00:02:10.593 compress/zlib: not in enabled drivers build config 00:02:10.593 regex/*: missing internal dependency, "regexdev" 00:02:10.593 ml/*: missing internal dependency, "mldev" 00:02:10.593 vdpa/ifc: not in enabled drivers build config 00:02:10.593 vdpa/mlx5: not in enabled drivers build config 00:02:10.593 vdpa/nfp: not in enabled drivers build config 00:02:10.593 vdpa/sfc: not in enabled drivers build config 00:02:10.593 event/*: missing internal dependency, "eventdev" 00:02:10.593 baseband/*: missing internal dependency, "bbdev" 00:02:10.593 gpu/*: missing internal dependency, "gpudev" 00:02:10.593 00:02:10.593 00:02:10.593 Build targets in project: 85 00:02:10.593 00:02:10.594 DPDK 23.11.0 00:02:10.594 00:02:10.594 User defined options 00:02:10.594 buildtype : debug 00:02:10.594 default_library : static 00:02:10.594 libdir : lib 00:02:10.594 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:02:10.594 c_args : -fPIC -Werror 00:02:10.594 c_link_args : 00:02:10.594 cpu_instruction_set: native 00:02:10.594 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:10.594 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,pcapng,bbdev 00:02:10.594 enable_docs : false 00:02:10.594 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:10.594 enable_kmods : false 00:02:10.594 tests : false 
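The "User defined options" summary above fully determines the DPDK meson configuration. SPDK drives this through its dpdk submodule build scripts, which this log does not show, so the direct invocation below is a reconstruction from the printed options (the disable_apps/disable_libs lists are abridged to a few entries; the full lists are in the summary above):

meson setup build-tmp \
    --buildtype=debug \
    --default-library=static \
    --libdir=lib \
    --prefix=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build \
    -Dc_args='-fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dtests=false \
    -Ddisable_apps=dumpcap,graph,pdump \
    -Ddisable_libs=acl,bbdev,bpf
ninja -C build-tmp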
00:02:10.594 00:02:10.594 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:10.859 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:02:10.859 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:10.859 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:10.859 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:10.859 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:10.859 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:10.859 [6/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:10.859 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:10.859 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:10.859 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:10.859 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:10.859 [11/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:10.859 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:10.859 [13/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:10.859 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:10.859 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:10.859 [16/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:10.859 [17/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:10.859 [18/265] Linking static target lib/librte_kvargs.a 00:02:10.859 [19/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:10.859 [20/265] Linking static target lib/librte_log.a 00:02:10.859 [21/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:10.859 [22/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:10.859 [23/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:10.859 [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:10.859 [25/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:10.859 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:10.859 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:10.859 [28/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:10.859 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:10.859 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:10.859 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:10.859 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:10.859 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:10.859 [34/265] Linking static target lib/librte_pci.a 00:02:10.859 [35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:10.859 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:10.859 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:10.859 [38/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:10.859 [39/265] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:10.859 [40/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:11.121 [41/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:11.121 [42/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.121 [43/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.381 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:11.381 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:11.381 [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:11.381 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:11.381 [48/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:11.381 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:11.382 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:11.382 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:11.382 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:11.382 [53/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:11.382 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:11.382 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:11.382 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:11.382 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:11.382 [58/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:11.382 [59/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:11.382 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:11.382 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:11.382 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:11.382 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:11.382 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:11.382 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:11.382 [66/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:11.382 [67/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:11.382 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:11.382 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:11.382 [70/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:11.382 [71/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:11.382 [72/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:11.382 [73/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:11.382 [74/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:11.382 [75/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:11.382 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:11.382 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:11.382 [78/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:11.382 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:11.382 [80/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:11.382 [81/265] Linking static target lib/librte_telemetry.a 00:02:11.382 [82/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:11.382 [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:11.382 [84/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:11.382 [85/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:11.382 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:11.382 [87/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:11.382 [88/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:11.382 [89/265] Linking static target lib/librte_meter.a 00:02:11.382 [90/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:11.382 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:11.382 [92/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:11.382 [93/265] Linking static target lib/librte_ring.a 00:02:11.382 [94/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:11.382 [95/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:11.382 [96/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:11.382 [97/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:11.382 [98/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:11.382 [99/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:11.382 [100/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:11.382 [101/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:11.382 [102/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:11.382 [103/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:11.382 [104/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:11.382 [105/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:11.382 [106/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:11.382 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:11.382 [108/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:11.382 [109/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:11.382 [110/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:11.382 [111/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:11.382 [112/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:11.382 [113/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:11.382 [114/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:11.382 [115/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:11.382 [116/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:11.382 [117/265] Linking static target lib/librte_cmdline.a 00:02:11.382 [118/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.382 [119/265] 
Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:11.382 [120/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:11.382 [121/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:11.382 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:11.382 [123/265] Linking static target lib/librte_timer.a 00:02:11.382 [124/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:11.382 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:11.382 [126/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:11.382 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:11.382 [128/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:11.382 [129/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:11.382 [130/265] Linking static target lib/librte_dmadev.a 00:02:11.382 [131/265] Linking static target lib/librte_eal.a 00:02:11.382 [132/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:11.382 [133/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:11.382 [134/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:11.382 [135/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:11.382 [136/265] Linking static target lib/librte_rcu.a 00:02:11.382 [137/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:11.382 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:11.641 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:11.641 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:11.641 [141/265] Linking target lib/librte_log.so.24.0 00:02:11.641 [142/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:11.641 [143/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:11.641 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:11.641 [145/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:11.641 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:11.641 [147/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:11.641 [148/265] Linking static target lib/librte_compressdev.a 00:02:11.641 [149/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:11.641 [150/265] Linking static target lib/librte_net.a 00:02:11.641 [151/265] Linking static target lib/librte_mempool.a 00:02:11.641 [152/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:11.641 [153/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:11.641 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:11.641 [155/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:11.641 [156/265] Linking static target lib/librte_reorder.a 00:02:11.641 [157/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:11.641 [158/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:11.641 [159/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:11.641 [160/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:11.641 [161/265] Compiling C object 
lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:11.641 [162/265] Linking static target lib/librte_security.a 00:02:11.641 [163/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:11.641 [164/265] Linking static target lib/librte_power.a 00:02:11.641 [165/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:11.641 [166/265] Linking static target lib/librte_mbuf.a 00:02:11.641 [167/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:11.641 [168/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:11.641 [169/265] Linking static target lib/librte_hash.a 00:02:11.641 [170/265] Linking target lib/librte_kvargs.so.24.0 00:02:11.641 [171/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:11.641 [172/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:11.641 [173/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:11.641 [174/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:11.641 [175/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:11.641 [176/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.900 [177/265] Linking static target lib/librte_cryptodev.a 00:02:11.900 [178/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:11.900 [179/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:11.900 [180/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.900 [181/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:11.900 [182/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:11.900 [183/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:11.900 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:11.900 [185/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:11.900 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:11.900 [187/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:11.900 [188/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.900 [189/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:11.900 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:11.900 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:11.900 [192/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:11.900 [193/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:11.900 [194/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.900 [195/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.900 [196/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.900 [197/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:12.160 [198/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:12.160 [199/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.160 [200/265] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:12.160 [201/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.160 [202/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:12.160 [203/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:12.160 [204/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.160 [205/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.160 [206/265] Linking static target lib/librte_ethdev.a 00:02:12.160 [207/265] Linking static target drivers/librte_bus_vdev.a 00:02:12.160 [208/265] Linking static target drivers/librte_mempool_ring.a 00:02:12.160 [209/265] Linking target lib/librte_telemetry.so.24.0 00:02:12.160 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:12.160 [211/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:12.160 [212/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:12.160 [213/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:12.160 [214/265] Linking static target drivers/librte_bus_pci.a 00:02:12.160 [215/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:12.160 [216/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.419 [217/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.419 [218/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.419 [219/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.682 [220/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.682 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.682 [222/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.682 [223/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.942 [224/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:12.942 [225/265] Linking static target lib/librte_vhost.a 00:02:12.942 [226/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.880 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.260 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.538 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.946 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.946 [231/265] Linking target lib/librte_eal.so.24.0 00:02:24.206 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:24.206 [233/265] Linking target lib/librte_ring.so.24.0 00:02:24.206 [234/265] Linking target lib/librte_meter.so.24.0 00:02:24.206 [235/265] Linking target lib/librte_timer.so.24.0 00:02:24.206 [236/265] Linking target lib/librte_pci.so.24.0 00:02:24.206 [237/265] Linking target 
lib/librte_dmadev.so.24.0 00:02:24.206 [238/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:24.466 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:24.466 [240/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:24.466 [241/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:24.466 [242/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:24.466 [243/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:24.466 [244/265] Linking target lib/librte_rcu.so.24.0 00:02:24.466 [245/265] Linking target lib/librte_mempool.so.24.0 00:02:24.466 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:24.466 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:24.466 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:24.725 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:24.725 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:24.725 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:24.984 [252/265] Linking target lib/librte_reorder.so.24.0 00:02:24.984 [253/265] Linking target lib/librte_compressdev.so.24.0 00:02:24.984 [254/265] Linking target lib/librte_net.so.24.0 00:02:24.984 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:02:24.984 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:24.984 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:24.984 [258/265] Linking target lib/librte_security.so.24.0 00:02:24.984 [259/265] Linking target lib/librte_ethdev.so.24.0 00:02:24.984 [260/265] Linking target lib/librte_cmdline.so.24.0 00:02:24.984 [261/265] Linking target lib/librte_hash.so.24.0 00:02:25.244 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:25.244 [263/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:25.244 [264/265] Linking target lib/librte_power.so.24.0 00:02:25.244 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:25.244 INFO: autodetecting backend as ninja 00:02:25.244 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:26.182 CC lib/ut_mock/mock.o 00:02:26.182 CC lib/log/log.o 00:02:26.182 CC lib/log/log_deprecated.o 00:02:26.182 CC lib/log/log_flags.o 00:02:26.182 CC lib/ut/ut.o 00:02:26.442 LIB libspdk_ut_mock.a 00:02:26.442 LIB libspdk_log.a 00:02:26.442 LIB libspdk_ut.a 00:02:26.701 CC lib/dma/dma.o 00:02:26.701 CC lib/ioat/ioat.o 00:02:26.701 CC lib/util/base64.o 00:02:26.701 CC lib/util/bit_array.o 00:02:26.701 CC lib/util/crc32.o 00:02:26.701 CC lib/util/cpuset.o 00:02:26.701 CC lib/util/crc16.o 00:02:26.701 CC lib/util/crc32c.o 00:02:26.701 CXX lib/trace_parser/trace.o 00:02:26.701 CC lib/util/dif.o 00:02:26.701 CC lib/util/crc32_ieee.o 00:02:26.701 CC lib/util/crc64.o 00:02:26.701 CC lib/util/fd.o 00:02:26.701 CC lib/util/iov.o 00:02:26.701 CC lib/util/file.o 00:02:26.701 CC lib/util/hexlify.o 00:02:26.701 CC lib/util/math.o 00:02:26.701 CC lib/util/string.o 00:02:26.701 CC lib/util/pipe.o 00:02:26.701 CC lib/util/strerror_tls.o 00:02:26.701 CC lib/util/uuid.o 00:02:26.701 CC lib/util/fd_group.o 00:02:26.701 CC lib/util/xor.o 00:02:26.701 CC 
lib/util/zipf.o 00:02:26.701 CC lib/vfio_user/host/vfio_user_pci.o 00:02:26.701 CC lib/vfio_user/host/vfio_user.o 00:02:26.960 LIB libspdk_dma.a 00:02:26.960 LIB libspdk_ioat.a 00:02:26.960 LIB libspdk_vfio_user.a 00:02:26.960 LIB libspdk_util.a 00:02:27.220 LIB libspdk_trace_parser.a 00:02:27.220 CC lib/idxd/idxd.o 00:02:27.220 CC lib/idxd/idxd_user.o 00:02:27.220 CC lib/idxd/idxd_kernel.o 00:02:27.220 CC lib/conf/conf.o 00:02:27.220 CC lib/rdma/common.o 00:02:27.220 CC lib/rdma/rdma_verbs.o 00:02:27.220 CC lib/env_dpdk/env.o 00:02:27.220 CC lib/env_dpdk/memory.o 00:02:27.220 CC lib/vmd/vmd.o 00:02:27.220 CC lib/vmd/led.o 00:02:27.220 CC lib/env_dpdk/pci.o 00:02:27.220 CC lib/env_dpdk/init.o 00:02:27.220 CC lib/env_dpdk/threads.o 00:02:27.220 CC lib/json/json_parse.o 00:02:27.220 CC lib/env_dpdk/pci_ioat.o 00:02:27.220 CC lib/env_dpdk/pci_virtio.o 00:02:27.220 CC lib/json/json_util.o 00:02:27.480 CC lib/env_dpdk/pci_vmd.o 00:02:27.480 CC lib/json/json_write.o 00:02:27.480 CC lib/env_dpdk/pci_idxd.o 00:02:27.480 CC lib/env_dpdk/pci_event.o 00:02:27.480 CC lib/env_dpdk/sigbus_handler.o 00:02:27.480 CC lib/env_dpdk/pci_dpdk.o 00:02:27.480 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:27.480 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:27.480 LIB libspdk_conf.a 00:02:27.480 LIB libspdk_rdma.a 00:02:27.480 LIB libspdk_json.a 00:02:27.740 LIB libspdk_idxd.a 00:02:27.740 LIB libspdk_vmd.a 00:02:27.740 CC lib/jsonrpc/jsonrpc_server.o 00:02:27.740 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:27.740 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:27.740 CC lib/jsonrpc/jsonrpc_client.o 00:02:28.000 LIB libspdk_jsonrpc.a 00:02:28.260 LIB libspdk_env_dpdk.a 00:02:28.260 CC lib/rpc/rpc.o 00:02:28.520 LIB libspdk_rpc.a 00:02:28.780 CC lib/sock/sock.o 00:02:28.780 CC lib/sock/sock_rpc.o 00:02:28.780 CC lib/trace/trace.o 00:02:28.780 CC lib/trace/trace_flags.o 00:02:28.780 CC lib/trace/trace_rpc.o 00:02:28.780 CC lib/notify/notify_rpc.o 00:02:28.780 CC lib/notify/notify.o 00:02:28.780 LIB libspdk_notify.a 00:02:28.780 LIB libspdk_trace.a 00:02:29.040 LIB libspdk_sock.a 00:02:29.299 CC lib/thread/thread.o 00:02:29.299 CC lib/thread/iobuf.o 00:02:29.299 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:29.299 CC lib/nvme/nvme_ctrlr.o 00:02:29.299 CC lib/nvme/nvme_ns.o 00:02:29.299 CC lib/nvme/nvme_fabric.o 00:02:29.299 CC lib/nvme/nvme_ns_cmd.o 00:02:29.299 CC lib/nvme/nvme_pcie.o 00:02:29.299 CC lib/nvme/nvme_pcie_common.o 00:02:29.299 CC lib/nvme/nvme_qpair.o 00:02:29.299 CC lib/nvme/nvme_transport.o 00:02:29.299 CC lib/nvme/nvme.o 00:02:29.299 CC lib/nvme/nvme_quirks.o 00:02:29.299 CC lib/nvme/nvme_discovery.o 00:02:29.299 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:29.299 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:29.299 CC lib/nvme/nvme_tcp.o 00:02:29.299 CC lib/nvme/nvme_opal.o 00:02:29.299 CC lib/nvme/nvme_io_msg.o 00:02:29.299 CC lib/nvme/nvme_poll_group.o 00:02:29.299 CC lib/nvme/nvme_zns.o 00:02:29.299 CC lib/nvme/nvme_cuse.o 00:02:29.299 CC lib/nvme/nvme_vfio_user.o 00:02:29.299 CC lib/nvme/nvme_rdma.o 00:02:29.868 LIB libspdk_thread.a 00:02:30.128 CC lib/blob/request.o 00:02:30.128 CC lib/blob/blobstore.o 00:02:30.128 CC lib/blob/zeroes.o 00:02:30.128 CC lib/blob/blob_bs_dev.o 00:02:30.128 CC lib/init/subsystem.o 00:02:30.128 CC lib/init/json_config.o 00:02:30.128 CC lib/virtio/virtio_vhost_user.o 00:02:30.128 CC lib/init/subsystem_rpc.o 00:02:30.128 CC lib/virtio/virtio.o 00:02:30.128 CC lib/init/rpc.o 00:02:30.128 CC lib/virtio/virtio_vfio_user.o 00:02:30.128 CC lib/virtio/virtio_pci.o 00:02:30.128 CC lib/accel/accel.o 00:02:30.128 CC 
lib/accel/accel_rpc.o 00:02:30.128 CC lib/accel/accel_sw.o 00:02:30.128 CC lib/vfu_tgt/tgt_endpoint.o 00:02:30.128 CC lib/vfu_tgt/tgt_rpc.o 00:02:30.387 LIB libspdk_init.a 00:02:30.387 LIB libspdk_virtio.a 00:02:30.387 LIB libspdk_vfu_tgt.a 00:02:30.387 LIB libspdk_nvme.a 00:02:30.646 CC lib/event/app.o 00:02:30.646 CC lib/event/reactor.o 00:02:30.646 CC lib/event/scheduler_static.o 00:02:30.646 CC lib/event/log_rpc.o 00:02:30.646 CC lib/event/app_rpc.o 00:02:30.905 LIB libspdk_accel.a 00:02:30.905 LIB libspdk_event.a 00:02:31.165 CC lib/bdev/bdev.o 00:02:31.165 CC lib/bdev/part.o 00:02:31.165 CC lib/bdev/bdev_rpc.o 00:02:31.165 CC lib/bdev/bdev_zone.o 00:02:31.165 CC lib/bdev/scsi_nvme.o 00:02:31.733 LIB libspdk_blob.a 00:02:31.992 CC lib/lvol/lvol.o 00:02:31.992 CC lib/blobfs/blobfs.o 00:02:31.992 CC lib/blobfs/tree.o 00:02:32.560 LIB libspdk_lvol.a 00:02:32.560 LIB libspdk_blobfs.a 00:02:32.818 LIB libspdk_bdev.a 00:02:33.076 CC lib/nbd/nbd.o 00:02:33.076 CC lib/nbd/nbd_rpc.o 00:02:33.076 CC lib/ublk/ublk.o 00:02:33.076 CC lib/ublk/ublk_rpc.o 00:02:33.076 CC lib/nvmf/ctrlr.o 00:02:33.076 CC lib/nvmf/ctrlr_discovery.o 00:02:33.076 CC lib/nvmf/ctrlr_bdev.o 00:02:33.076 CC lib/nvmf/nvmf_rpc.o 00:02:33.076 CC lib/nvmf/subsystem.o 00:02:33.076 CC lib/nvmf/nvmf.o 00:02:33.076 CC lib/nvmf/transport.o 00:02:33.076 CC lib/nvmf/tcp.o 00:02:33.076 CC lib/scsi/dev.o 00:02:33.076 CC lib/nvmf/rdma.o 00:02:33.076 CC lib/scsi/lun.o 00:02:33.076 CC lib/nvmf/vfio_user.o 00:02:33.076 CC lib/scsi/port.o 00:02:33.076 CC lib/scsi/scsi.o 00:02:33.076 CC lib/scsi/scsi_bdev.o 00:02:33.076 CC lib/ftl/ftl_core.o 00:02:33.076 CC lib/scsi/scsi_pr.o 00:02:33.076 CC lib/ftl/ftl_init.o 00:02:33.076 CC lib/scsi/scsi_rpc.o 00:02:33.076 CC lib/ftl/ftl_layout.o 00:02:33.076 CC lib/ftl/ftl_debug.o 00:02:33.076 CC lib/scsi/task.o 00:02:33.076 CC lib/ftl/ftl_io.o 00:02:33.076 CC lib/ftl/ftl_sb.o 00:02:33.076 CC lib/ftl/ftl_l2p.o 00:02:33.076 CC lib/ftl/ftl_l2p_flat.o 00:02:33.076 CC lib/ftl/ftl_nv_cache.o 00:02:33.076 CC lib/ftl/ftl_band.o 00:02:33.077 CC lib/ftl/ftl_band_ops.o 00:02:33.077 CC lib/ftl/ftl_writer.o 00:02:33.077 CC lib/ftl/ftl_rq.o 00:02:33.077 CC lib/ftl/ftl_reloc.o 00:02:33.077 CC lib/ftl/ftl_l2p_cache.o 00:02:33.077 CC lib/ftl/ftl_p2l.o 00:02:33.077 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:33.077 CC lib/ftl/mngt/ftl_mngt.o 00:02:33.077 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:33.077 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:33.077 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:33.077 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:33.077 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:33.077 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:33.077 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:33.077 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:33.077 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:33.077 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:33.077 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:33.077 CC lib/ftl/utils/ftl_conf.o 00:02:33.077 CC lib/ftl/utils/ftl_bitmap.o 00:02:33.077 CC lib/ftl/utils/ftl_md.o 00:02:33.077 CC lib/ftl/utils/ftl_mempool.o 00:02:33.077 CC lib/ftl/utils/ftl_property.o 00:02:33.077 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:33.077 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:33.077 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:33.077 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:33.077 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:33.077 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:33.077 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:33.077 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:33.077 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:33.077 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 
00:02:33.077 CC lib/ftl/base/ftl_base_dev.o 00:02:33.077 CC lib/ftl/base/ftl_base_bdev.o 00:02:33.077 CC lib/ftl/ftl_trace.o 00:02:33.335 LIB libspdk_nbd.a 00:02:33.594 LIB libspdk_ublk.a 00:02:33.594 LIB libspdk_scsi.a 00:02:33.594 LIB libspdk_ftl.a 00:02:33.854 CC lib/vhost/vhost_scsi.o 00:02:33.854 CC lib/vhost/vhost_rpc.o 00:02:33.854 CC lib/vhost/vhost.o 00:02:33.854 CC lib/vhost/vhost_blk.o 00:02:33.854 CC lib/vhost/rte_vhost_user.o 00:02:33.854 CC lib/iscsi/init_grp.o 00:02:33.854 CC lib/iscsi/conn.o 00:02:33.854 CC lib/iscsi/iscsi.o 00:02:33.854 CC lib/iscsi/param.o 00:02:33.854 CC lib/iscsi/md5.o 00:02:33.854 CC lib/iscsi/portal_grp.o 00:02:33.854 CC lib/iscsi/tgt_node.o 00:02:33.854 CC lib/iscsi/iscsi_subsystem.o 00:02:33.854 CC lib/iscsi/iscsi_rpc.o 00:02:33.854 CC lib/iscsi/task.o 00:02:34.113 LIB libspdk_nvmf.a 00:02:34.372 LIB libspdk_vhost.a 00:02:34.631 LIB libspdk_iscsi.a 00:02:35.198 CC module/vfu_device/vfu_virtio.o 00:02:35.198 CC module/vfu_device/vfu_virtio_scsi.o 00:02:35.198 CC module/vfu_device/vfu_virtio_blk.o 00:02:35.198 CC module/vfu_device/vfu_virtio_rpc.o 00:02:35.198 CC module/env_dpdk/env_dpdk_rpc.o 00:02:35.198 CC module/accel/dsa/accel_dsa_rpc.o 00:02:35.198 CC module/accel/dsa/accel_dsa.o 00:02:35.198 CC module/scheduler/gscheduler/gscheduler.o 00:02:35.198 CC module/accel/ioat/accel_ioat.o 00:02:35.198 CC module/accel/ioat/accel_ioat_rpc.o 00:02:35.199 CC module/sock/posix/posix.o 00:02:35.199 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:35.199 CC module/accel/iaa/accel_iaa.o 00:02:35.199 CC module/accel/iaa/accel_iaa_rpc.o 00:02:35.199 CC module/accel/error/accel_error.o 00:02:35.199 LIB libspdk_env_dpdk_rpc.a 00:02:35.199 CC module/accel/error/accel_error_rpc.o 00:02:35.199 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:35.199 CC module/blob/bdev/blob_bdev.o 00:02:35.199 LIB libspdk_scheduler_gscheduler.a 00:02:35.199 LIB libspdk_scheduler_dpdk_governor.a 00:02:35.199 LIB libspdk_accel_error.a 00:02:35.199 LIB libspdk_accel_ioat.a 00:02:35.199 LIB libspdk_scheduler_dynamic.a 00:02:35.199 LIB libspdk_accel_iaa.a 00:02:35.199 LIB libspdk_accel_dsa.a 00:02:35.457 LIB libspdk_blob_bdev.a 00:02:35.457 LIB libspdk_vfu_device.a 00:02:35.457 LIB libspdk_sock_posix.a 00:02:35.717 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:35.717 CC module/bdev/nvme/bdev_nvme.o 00:02:35.717 CC module/bdev/nvme/nvme_rpc.o 00:02:35.717 CC module/bdev/nvme/vbdev_opal.o 00:02:35.717 CC module/bdev/nvme/bdev_mdns_client.o 00:02:35.717 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:35.717 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:35.717 CC module/blobfs/bdev/blobfs_bdev.o 00:02:35.717 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:35.717 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:35.718 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:35.718 CC module/bdev/malloc/bdev_malloc.o 00:02:35.718 CC module/bdev/split/vbdev_split.o 00:02:35.718 CC module/bdev/split/vbdev_split_rpc.o 00:02:35.718 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:35.718 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:35.718 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:35.718 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:35.718 CC module/bdev/ftl/bdev_ftl.o 00:02:35.718 CC module/bdev/gpt/gpt.o 00:02:35.718 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:35.718 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:35.718 CC module/bdev/gpt/vbdev_gpt.o 00:02:35.718 CC module/bdev/lvol/vbdev_lvol.o 00:02:35.718 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:35.718 CC module/bdev/delay/vbdev_delay.o 
00:02:35.718 CC module/bdev/error/vbdev_error.o 00:02:35.718 CC module/bdev/passthru/vbdev_passthru.o 00:02:35.718 CC module/bdev/null/bdev_null_rpc.o 00:02:35.718 CC module/bdev/error/vbdev_error_rpc.o 00:02:35.718 CC module/bdev/null/bdev_null.o 00:02:35.718 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:35.718 CC module/bdev/iscsi/bdev_iscsi.o 00:02:35.718 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:35.718 CC module/bdev/raid/bdev_raid_rpc.o 00:02:35.718 CC module/bdev/raid/bdev_raid.o 00:02:35.718 CC module/bdev/raid/bdev_raid_sb.o 00:02:35.718 CC module/bdev/raid/raid0.o 00:02:35.718 CC module/bdev/raid/raid1.o 00:02:35.718 CC module/bdev/raid/concat.o 00:02:35.718 CC module/bdev/aio/bdev_aio.o 00:02:35.718 CC module/bdev/aio/bdev_aio_rpc.o 00:02:35.977 LIB libspdk_blobfs_bdev.a 00:02:35.977 LIB libspdk_bdev_split.a 00:02:35.977 LIB libspdk_bdev_error.a 00:02:35.977 LIB libspdk_bdev_gpt.a 00:02:35.977 LIB libspdk_bdev_null.a 00:02:35.977 LIB libspdk_bdev_ftl.a 00:02:35.977 LIB libspdk_bdev_passthru.a 00:02:35.977 LIB libspdk_bdev_zone_block.a 00:02:35.977 LIB libspdk_bdev_aio.a 00:02:35.977 LIB libspdk_bdev_iscsi.a 00:02:35.977 LIB libspdk_bdev_malloc.a 00:02:35.977 LIB libspdk_bdev_delay.a 00:02:35.977 LIB libspdk_bdev_lvol.a 00:02:35.977 LIB libspdk_bdev_virtio.a 00:02:36.237 LIB libspdk_bdev_raid.a 00:02:36.805 LIB libspdk_bdev_nvme.a 00:02:37.372 CC module/event/subsystems/scheduler/scheduler.o 00:02:37.372 CC module/event/subsystems/sock/sock.o 00:02:37.372 CC module/event/subsystems/vmd/vmd.o 00:02:37.372 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:37.372 CC module/event/subsystems/iobuf/iobuf.o 00:02:37.372 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:37.373 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:37.373 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:37.631 LIB libspdk_event_scheduler.a 00:02:37.631 LIB libspdk_event_sock.a 00:02:37.631 LIB libspdk_event_vmd.a 00:02:37.631 LIB libspdk_event_vfu_tgt.a 00:02:37.631 LIB libspdk_event_vhost_blk.a 00:02:37.631 LIB libspdk_event_iobuf.a 00:02:37.891 CC module/event/subsystems/accel/accel.o 00:02:37.891 LIB libspdk_event_accel.a 00:02:38.150 CC module/event/subsystems/bdev/bdev.o 00:02:38.410 LIB libspdk_event_bdev.a 00:02:38.670 CC module/event/subsystems/scsi/scsi.o 00:02:38.670 CC module/event/subsystems/ublk/ublk.o 00:02:38.670 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:38.670 CC module/event/subsystems/nbd/nbd.o 00:02:38.670 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:38.670 LIB libspdk_event_ublk.a 00:02:38.670 LIB libspdk_event_scsi.a 00:02:38.670 LIB libspdk_event_nbd.a 00:02:38.929 LIB libspdk_event_nvmf.a 00:02:39.188 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:39.188 CC module/event/subsystems/iscsi/iscsi.o 00:02:39.188 LIB libspdk_event_vhost_scsi.a 00:02:39.188 LIB libspdk_event_iscsi.a 00:02:39.447 TEST_HEADER include/spdk/accel.h 00:02:39.447 TEST_HEADER include/spdk/accel_module.h 00:02:39.447 CC app/spdk_nvme_identify/identify.o 00:02:39.447 TEST_HEADER include/spdk/assert.h 00:02:39.447 TEST_HEADER include/spdk/base64.h 00:02:39.447 TEST_HEADER include/spdk/bdev.h 00:02:39.447 TEST_HEADER include/spdk/barrier.h 00:02:39.447 TEST_HEADER include/spdk/bdev_module.h 00:02:39.447 TEST_HEADER include/spdk/bdev_zone.h 00:02:39.447 TEST_HEADER include/spdk/bit_array.h 00:02:39.447 TEST_HEADER include/spdk/bit_pool.h 00:02:39.447 CC app/spdk_nvme_discover/discovery_aer.o 00:02:39.447 TEST_HEADER include/spdk/blob_bdev.h 00:02:39.447 TEST_HEADER include/spdk/blobfs.h 
00:02:39.447 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:39.447 TEST_HEADER include/spdk/blob.h 00:02:39.447 CC app/spdk_nvme_perf/perf.o 00:02:39.447 TEST_HEADER include/spdk/conf.h 00:02:39.447 TEST_HEADER include/spdk/config.h 00:02:39.447 TEST_HEADER include/spdk/cpuset.h 00:02:39.447 TEST_HEADER include/spdk/crc32.h 00:02:39.447 TEST_HEADER include/spdk/crc16.h 00:02:39.447 CC app/trace_record/trace_record.o 00:02:39.447 CXX app/trace/trace.o 00:02:39.447 TEST_HEADER include/spdk/dif.h 00:02:39.447 TEST_HEADER include/spdk/dma.h 00:02:39.447 TEST_HEADER include/spdk/env_dpdk.h 00:02:39.447 CC app/spdk_lspci/spdk_lspci.o 00:02:39.447 TEST_HEADER include/spdk/crc64.h 00:02:39.447 TEST_HEADER include/spdk/event.h 00:02:39.447 TEST_HEADER include/spdk/env.h 00:02:39.447 TEST_HEADER include/spdk/file.h 00:02:39.447 TEST_HEADER include/spdk/fd_group.h 00:02:39.447 TEST_HEADER include/spdk/endian.h 00:02:39.447 TEST_HEADER include/spdk/ftl.h 00:02:39.447 TEST_HEADER include/spdk/fd.h 00:02:39.447 TEST_HEADER include/spdk/gpt_spec.h 00:02:39.447 TEST_HEADER include/spdk/hexlify.h 00:02:39.447 TEST_HEADER include/spdk/histogram_data.h 00:02:39.447 TEST_HEADER include/spdk/idxd_spec.h 00:02:39.447 TEST_HEADER include/spdk/idxd.h 00:02:39.447 TEST_HEADER include/spdk/init.h 00:02:39.447 TEST_HEADER include/spdk/ioat.h 00:02:39.447 CC app/spdk_top/spdk_top.o 00:02:39.447 TEST_HEADER include/spdk/ioat_spec.h 00:02:39.447 TEST_HEADER include/spdk/json.h 00:02:39.447 TEST_HEADER include/spdk/iscsi_spec.h 00:02:39.447 TEST_HEADER include/spdk/jsonrpc.h 00:02:39.447 TEST_HEADER include/spdk/likely.h 00:02:39.447 TEST_HEADER include/spdk/lvol.h 00:02:39.447 TEST_HEADER include/spdk/log.h 00:02:39.447 TEST_HEADER include/spdk/memory.h 00:02:39.447 TEST_HEADER include/spdk/mmio.h 00:02:39.447 TEST_HEADER include/spdk/nbd.h 00:02:39.447 CC test/rpc_client/rpc_client_test.o 00:02:39.447 TEST_HEADER include/spdk/notify.h 00:02:39.447 TEST_HEADER include/spdk/nvme.h 00:02:39.447 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:39.447 TEST_HEADER include/spdk/nvme_intel.h 00:02:39.447 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:39.447 TEST_HEADER include/spdk/nvme_spec.h 00:02:39.447 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:39.447 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:39.447 TEST_HEADER include/spdk/nvme_zns.h 00:02:39.447 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:39.447 TEST_HEADER include/spdk/nvmf.h 00:02:39.447 TEST_HEADER include/spdk/nvmf_spec.h 00:02:39.447 TEST_HEADER include/spdk/opal.h 00:02:39.447 TEST_HEADER include/spdk/nvmf_transport.h 00:02:39.447 TEST_HEADER include/spdk/opal_spec.h 00:02:39.447 TEST_HEADER include/spdk/pci_ids.h 00:02:39.447 TEST_HEADER include/spdk/queue.h 00:02:39.447 TEST_HEADER include/spdk/pipe.h 00:02:39.447 TEST_HEADER include/spdk/reduce.h 00:02:39.447 TEST_HEADER include/spdk/rpc.h 00:02:39.447 TEST_HEADER include/spdk/scsi.h 00:02:39.447 TEST_HEADER include/spdk/scheduler.h 00:02:39.447 TEST_HEADER include/spdk/scsi_spec.h 00:02:39.447 TEST_HEADER include/spdk/sock.h 00:02:39.447 TEST_HEADER include/spdk/stdinc.h 00:02:39.447 TEST_HEADER include/spdk/string.h 00:02:39.447 TEST_HEADER include/spdk/thread.h 00:02:39.447 TEST_HEADER include/spdk/trace.h 00:02:39.447 TEST_HEADER include/spdk/trace_parser.h 00:02:39.447 TEST_HEADER include/spdk/tree.h 00:02:39.447 TEST_HEADER include/spdk/ublk.h 00:02:39.447 TEST_HEADER include/spdk/util.h 00:02:39.447 TEST_HEADER include/spdk/uuid.h 00:02:39.447 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:39.447 
TEST_HEADER include/spdk/version.h 00:02:39.447 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:39.447 TEST_HEADER include/spdk/vhost.h 00:02:39.447 TEST_HEADER include/spdk/vmd.h 00:02:39.447 TEST_HEADER include/spdk/xor.h 00:02:39.447 TEST_HEADER include/spdk/zipf.h 00:02:39.447 CC app/nvmf_tgt/nvmf_main.o 00:02:39.447 CXX test/cpp_headers/accel.o 00:02:39.447 CXX test/cpp_headers/accel_module.o 00:02:39.447 CXX test/cpp_headers/assert.o 00:02:39.447 CXX test/cpp_headers/barrier.o 00:02:39.447 CXX test/cpp_headers/base64.o 00:02:39.447 CXX test/cpp_headers/bdev_module.o 00:02:39.447 CXX test/cpp_headers/bdev.o 00:02:39.447 CC app/spdk_dd/spdk_dd.o 00:02:39.447 CXX test/cpp_headers/bdev_zone.o 00:02:39.447 CXX test/cpp_headers/bit_array.o 00:02:39.447 CXX test/cpp_headers/bit_pool.o 00:02:39.709 CXX test/cpp_headers/blob_bdev.o 00:02:39.709 CXX test/cpp_headers/blobfs_bdev.o 00:02:39.709 CXX test/cpp_headers/blobfs.o 00:02:39.709 CXX test/cpp_headers/conf.o 00:02:39.709 CXX test/cpp_headers/blob.o 00:02:39.709 CXX test/cpp_headers/config.o 00:02:39.709 CXX test/cpp_headers/cpuset.o 00:02:39.709 CXX test/cpp_headers/crc16.o 00:02:39.709 CXX test/cpp_headers/crc32.o 00:02:39.709 CXX test/cpp_headers/crc64.o 00:02:39.709 CC app/spdk_tgt/spdk_tgt.o 00:02:39.709 CXX test/cpp_headers/dif.o 00:02:39.709 CXX test/cpp_headers/dma.o 00:02:39.710 CC app/vhost/vhost.o 00:02:39.710 CXX test/cpp_headers/endian.o 00:02:39.710 CXX test/cpp_headers/env_dpdk.o 00:02:39.710 CXX test/cpp_headers/env.o 00:02:39.710 CXX test/cpp_headers/event.o 00:02:39.710 CXX test/cpp_headers/fd_group.o 00:02:39.710 CXX test/cpp_headers/fd.o 00:02:39.710 CXX test/cpp_headers/file.o 00:02:39.710 CXX test/cpp_headers/gpt_spec.o 00:02:39.710 CXX test/cpp_headers/ftl.o 00:02:39.710 CXX test/cpp_headers/hexlify.o 00:02:39.710 CXX test/cpp_headers/histogram_data.o 00:02:39.710 CC app/iscsi_tgt/iscsi_tgt.o 00:02:39.710 CXX test/cpp_headers/idxd.o 00:02:39.710 CXX test/cpp_headers/init.o 00:02:39.710 CXX test/cpp_headers/idxd_spec.o 00:02:39.710 CC examples/ioat/verify/verify.o 00:02:39.710 CC examples/ioat/perf/perf.o 00:02:39.710 CC examples/sock/hello_world/hello_sock.o 00:02:39.710 CC examples/util/zipf/zipf.o 00:02:39.710 CC examples/vmd/led/led.o 00:02:39.710 CC examples/nvme/hello_world/hello_world.o 00:02:39.710 CXX test/cpp_headers/ioat.o 00:02:39.710 CC examples/idxd/perf/perf.o 00:02:39.710 CC examples/vmd/lsvmd/lsvmd.o 00:02:39.710 CC test/app/jsoncat/jsoncat.o 00:02:39.710 CC examples/nvme/abort/abort.o 00:02:39.710 CC examples/nvme/reconnect/reconnect.o 00:02:39.710 CC test/app/stub/stub.o 00:02:39.710 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:39.710 CC app/fio/nvme/fio_plugin.o 00:02:39.710 CC examples/nvme/hotplug/hotplug.o 00:02:39.710 CC test/env/memory/memory_ut.o 00:02:39.710 CC examples/nvme/arbitration/arbitration.o 00:02:39.710 CC test/app/histogram_perf/histogram_perf.o 00:02:39.710 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:39.710 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:39.710 CC test/env/pci/pci_ut.o 00:02:39.710 CC test/env/vtophys/vtophys.o 00:02:39.710 CC test/event/event_perf/event_perf.o 00:02:39.710 CC examples/accel/perf/accel_perf.o 00:02:39.710 CC test/event/reactor/reactor.o 00:02:39.710 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:39.710 CC test/event/reactor_perf/reactor_perf.o 00:02:39.710 CC examples/thread/thread/thread_ex.o 00:02:39.710 CC test/nvme/sgl/sgl.o 00:02:39.710 CC test/nvme/overhead/overhead.o 00:02:39.710 CC examples/blob/cli/blobcli.o 
00:02:39.710 CC test/nvme/e2edp/nvme_dp.o 00:02:39.710 CC test/thread/lock/spdk_lock.o 00:02:39.710 CC test/nvme/simple_copy/simple_copy.o 00:02:39.710 CC test/nvme/reserve/reserve.o 00:02:39.710 CC test/thread/poller_perf/poller_perf.o 00:02:39.710 CC test/nvme/connect_stress/connect_stress.o 00:02:39.710 CC test/nvme/compliance/nvme_compliance.o 00:02:39.710 CC examples/bdev/hello_world/hello_bdev.o 00:02:39.710 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:39.710 CC test/nvme/reset/reset.o 00:02:39.710 CC test/nvme/aer/aer.o 00:02:39.710 CC test/nvme/cuse/cuse.o 00:02:39.710 CC test/nvme/fdp/fdp.o 00:02:39.710 CC examples/blob/hello_world/hello_blob.o 00:02:39.710 CC test/nvme/fused_ordering/fused_ordering.o 00:02:39.710 CC test/nvme/err_injection/err_injection.o 00:02:39.710 CC test/event/app_repeat/app_repeat.o 00:02:39.710 CC test/nvme/startup/startup.o 00:02:39.710 CC test/blobfs/mkfs/mkfs.o 00:02:39.710 CC examples/bdev/bdevperf/bdevperf.o 00:02:39.710 CC test/nvme/boot_partition/boot_partition.o 00:02:39.710 CC test/accel/dif/dif.o 00:02:39.710 LINK spdk_lspci 00:02:39.710 CC app/fio/bdev/fio_plugin.o 00:02:39.710 CC examples/nvmf/nvmf/nvmf.o 00:02:39.710 CC test/bdev/bdevio/bdevio.o 00:02:39.710 CC test/dma/test_dma/test_dma.o 00:02:39.710 CC test/event/scheduler/scheduler.o 00:02:39.710 CC test/app/bdev_svc/bdev_svc.o 00:02:39.710 LINK spdk_nvme_discover 00:02:39.710 CC test/lvol/esnap/esnap.o 00:02:39.710 CC test/env/mem_callbacks/mem_callbacks.o 00:02:39.710 LINK interrupt_tgt 00:02:39.710 LINK rpc_client_test 00:02:39.710 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:39.710 CXX test/cpp_headers/ioat_spec.o 00:02:39.710 CXX test/cpp_headers/iscsi_spec.o 00:02:39.710 CXX test/cpp_headers/json.o 00:02:39.710 CXX test/cpp_headers/jsonrpc.o 00:02:39.710 CXX test/cpp_headers/likely.o 00:02:39.710 CXX test/cpp_headers/log.o 00:02:39.710 CXX test/cpp_headers/lvol.o 00:02:39.710 CXX test/cpp_headers/memory.o 00:02:39.710 CXX test/cpp_headers/mmio.o 00:02:39.710 CXX test/cpp_headers/nbd.o 00:02:39.710 CXX test/cpp_headers/notify.o 00:02:39.710 CXX test/cpp_headers/nvme.o 00:02:39.710 CXX test/cpp_headers/nvme_intel.o 00:02:39.710 LINK spdk_trace_record 00:02:39.710 CXX test/cpp_headers/nvme_ocssd.o 00:02:39.710 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:39.710 CXX test/cpp_headers/nvme_spec.o 00:02:39.710 CXX test/cpp_headers/nvme_zns.o 00:02:39.710 LINK lsvmd 00:02:39.710 CXX test/cpp_headers/nvmf_cmd.o 00:02:39.976 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:39.976 CXX test/cpp_headers/nvmf.o 00:02:39.976 CXX test/cpp_headers/nvmf_spec.o 00:02:39.976 CXX test/cpp_headers/nvmf_transport.o 00:02:39.976 CXX test/cpp_headers/opal.o 00:02:39.976 CXX test/cpp_headers/opal_spec.o 00:02:39.976 CXX test/cpp_headers/pci_ids.o 00:02:39.976 LINK zipf 00:02:39.976 LINK led 00:02:39.976 LINK jsoncat 00:02:39.976 CXX test/cpp_headers/pipe.o 00:02:39.976 CXX test/cpp_headers/queue.o 00:02:39.976 LINK nvmf_tgt 00:02:39.976 CXX test/cpp_headers/reduce.o 00:02:39.976 LINK vhost 00:02:39.976 CXX test/cpp_headers/rpc.o 00:02:39.976 LINK reactor 00:02:39.976 LINK event_perf 00:02:39.976 LINK vtophys 00:02:39.976 CXX test/cpp_headers/scheduler.o 00:02:39.976 LINK histogram_perf 00:02:39.976 LINK reactor_perf 00:02:39.976 LINK env_dpdk_post_init 00:02:39.976 CXX test/cpp_headers/scsi.o 00:02:39.976 LINK poller_perf 00:02:39.976 CXX test/cpp_headers/scsi_spec.o 00:02:39.976 CXX test/cpp_headers/sock.o 00:02:39.976 LINK spdk_tgt 00:02:39.976 LINK stub 00:02:39.976 LINK app_repeat 00:02:39.976 LINK 
pmr_persistence 00:02:39.976 LINK iscsi_tgt 00:02:39.976 LINK ioat_perf 00:02:39.976 LINK verify 00:02:39.976 CXX test/cpp_headers/stdinc.o 00:02:39.976 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:39.976 LINK startup 00:02:39.976 LINK cmb_copy 00:02:39.976 LINK boot_partition 00:02:39.977 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:39.977 LINK reserve 00:02:39.977 LINK doorbell_aers 00:02:39.977 LINK connect_stress 00:02:39.977 LINK err_injection 00:02:39.977 LINK hello_world 00:02:39.977 LINK hello_sock 00:02:39.977 LINK fused_ordering 00:02:39.977 LINK hotplug 00:02:39.977 LINK simple_copy 00:02:39.977 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:02:39.977 LINK mkfs 00:02:39.977 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:02:39.977 LINK nvme_dp 00:02:39.977 LINK bdev_svc 00:02:39.977 LINK hello_blob 00:02:39.977 LINK hello_bdev 00:02:39.977 CXX test/cpp_headers/string.o 00:02:39.977 LINK aer 00:02:39.977 CXX test/cpp_headers/thread.o 00:02:39.977 LINK thread 00:02:39.977 LINK reset 00:02:39.977 CXX test/cpp_headers/trace.o 00:02:39.977 CXX test/cpp_headers/trace_parser.o 00:02:39.977 LINK scheduler 00:02:39.977 LINK sgl 00:02:39.977 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:39.977 LINK overhead 00:02:39.977 CXX test/cpp_headers/tree.o 00:02:39.977 CXX test/cpp_headers/ublk.o 00:02:39.977 CXX test/cpp_headers/util.o 00:02:39.977 CXX test/cpp_headers/uuid.o 00:02:39.977 LINK fdp 00:02:39.977 CXX test/cpp_headers/version.o 00:02:39.977 LINK spdk_trace 00:02:39.977 CXX test/cpp_headers/vfio_user_pci.o 00:02:39.977 CXX test/cpp_headers/vfio_user_spec.o 00:02:39.977 CXX test/cpp_headers/vhost.o 00:02:39.977 CXX test/cpp_headers/vmd.o 00:02:39.977 CXX test/cpp_headers/xor.o 00:02:39.977 LINK nvmf 00:02:39.977 CXX test/cpp_headers/zipf.o 00:02:39.977 LINK idxd_perf 00:02:40.238 LINK reconnect 00:02:40.238 LINK abort 00:02:40.238 LINK arbitration 00:02:40.238 LINK test_dma 00:02:40.238 LINK dif 00:02:40.238 LINK bdevio 00:02:40.238 LINK spdk_dd 00:02:40.238 LINK pci_ut 00:02:40.238 LINK nvme_manage 00:02:40.238 LINK accel_perf 00:02:40.238 LINK blobcli 00:02:40.238 LINK nvme_compliance 00:02:40.497 LINK nvme_fuzz 00:02:40.497 LINK llvm_vfio_fuzz 00:02:40.497 LINK spdk_nvme 00:02:40.497 LINK spdk_bdev 00:02:40.497 LINK mem_callbacks 00:02:40.497 LINK spdk_nvme_identify 00:02:40.497 LINK vhost_fuzz 00:02:40.497 LINK memory_ut 00:02:40.756 LINK spdk_nvme_perf 00:02:40.756 LINK llvm_nvme_fuzz 00:02:40.756 LINK spdk_top 00:02:40.756 LINK bdevperf 00:02:41.014 LINK cuse 00:02:41.014 LINK spdk_lock 00:02:41.272 LINK iscsi_fuzz 00:02:43.177 LINK esnap 00:02:43.436 00:02:43.436 real 0m41.090s 00:02:43.436 user 5m44.192s 00:02:43.436 sys 2m46.711s 00:02:43.436 23:02:39 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:43.436 23:02:39 -- common/autotest_common.sh@10 -- $ set +x 00:02:43.436 ************************************ 00:02:43.436 END TEST make 00:02:43.436 ************************************ 00:02:43.695 23:02:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:43.695 23:02:40 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:43.695 23:02:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:43.695 23:02:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:43.695 23:02:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:43.695 23:02:40 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:43.695 23:02:40 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:43.695 23:02:40 -- scripts/common.sh@335 -- # IFS=.-: 
00:02:43.695 23:02:40 -- scripts/common.sh@335 -- # read -ra ver1 00:02:43.695 23:02:40 -- scripts/common.sh@336 -- # IFS=.-: 00:02:43.695 23:02:40 -- scripts/common.sh@336 -- # read -ra ver2 00:02:43.695 23:02:40 -- scripts/common.sh@337 -- # local 'op=<' 00:02:43.695 23:02:40 -- scripts/common.sh@339 -- # ver1_l=2 00:02:43.695 23:02:40 -- scripts/common.sh@340 -- # ver2_l=1 00:02:43.695 23:02:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:43.695 23:02:40 -- scripts/common.sh@343 -- # case "$op" in 00:02:43.695 23:02:40 -- scripts/common.sh@344 -- # : 1 00:02:43.695 23:02:40 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:43.695 23:02:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:43.695 23:02:40 -- scripts/common.sh@364 -- # decimal 1 00:02:43.695 23:02:40 -- scripts/common.sh@352 -- # local d=1 00:02:43.695 23:02:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:43.695 23:02:40 -- scripts/common.sh@354 -- # echo 1 00:02:43.695 23:02:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:43.695 23:02:40 -- scripts/common.sh@365 -- # decimal 2 00:02:43.695 23:02:40 -- scripts/common.sh@352 -- # local d=2 00:02:43.695 23:02:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:43.695 23:02:40 -- scripts/common.sh@354 -- # echo 2 00:02:43.695 23:02:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:43.696 23:02:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:43.696 23:02:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:43.696 23:02:40 -- scripts/common.sh@367 -- # return 0 00:02:43.696 23:02:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:43.696 23:02:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:43.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.696 --rc genhtml_branch_coverage=1 00:02:43.696 --rc genhtml_function_coverage=1 00:02:43.696 --rc genhtml_legend=1 00:02:43.696 --rc geninfo_all_blocks=1 00:02:43.696 --rc geninfo_unexecuted_blocks=1 00:02:43.696 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:43.696 ' 00:02:43.696 23:02:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:43.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.696 --rc genhtml_branch_coverage=1 00:02:43.696 --rc genhtml_function_coverage=1 00:02:43.696 --rc genhtml_legend=1 00:02:43.696 --rc geninfo_all_blocks=1 00:02:43.696 --rc geninfo_unexecuted_blocks=1 00:02:43.696 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:43.696 ' 00:02:43.696 23:02:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:43.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.696 --rc genhtml_branch_coverage=1 00:02:43.696 --rc genhtml_function_coverage=1 00:02:43.696 --rc genhtml_legend=1 00:02:43.696 --rc geninfo_all_blocks=1 00:02:43.696 --rc geninfo_unexecuted_blocks=1 00:02:43.696 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:43.696 ' 00:02:43.696 23:02:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:43.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:43.696 --rc genhtml_branch_coverage=1 00:02:43.696 --rc genhtml_function_coverage=1 00:02:43.696 --rc genhtml_legend=1 00:02:43.696 --rc geninfo_all_blocks=1 00:02:43.696 --rc geninfo_unexecuted_blocks=1 00:02:43.696 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:02:43.696 ' 00:02:43.696 23:02:40 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:02:43.696 23:02:40 -- nvmf/common.sh@7 -- # uname -s 00:02:43.696 23:02:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:43.696 23:02:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:43.696 23:02:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:43.696 23:02:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:43.696 23:02:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:43.696 23:02:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:43.696 23:02:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:43.696 23:02:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:43.696 23:02:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:43.696 23:02:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:43.696 23:02:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:02:43.696 23:02:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:02:43.696 23:02:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:43.696 23:02:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:43.696 23:02:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:43.696 23:02:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:02:43.696 23:02:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:43.696 23:02:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:43.696 23:02:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:43.696 23:02:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.696 23:02:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.696 23:02:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.696 23:02:40 -- paths/export.sh@5 -- # export PATH 00:02:43.696 23:02:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:43.696 23:02:40 -- nvmf/common.sh@46 -- # : 0 00:02:43.696 23:02:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:43.696 23:02:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:43.696 23:02:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:43.696 23:02:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:43.696 
23:02:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:43.696 23:02:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:43.696 23:02:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:43.696 23:02:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:43.696 23:02:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:43.696 23:02:40 -- spdk/autotest.sh@32 -- # uname -s 00:02:43.696 23:02:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:43.696 23:02:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:43.696 23:02:40 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:43.696 23:02:40 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:43.696 23:02:40 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:02:43.696 23:02:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:43.696 23:02:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:43.696 23:02:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:43.696 23:02:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:43.696 23:02:40 -- spdk/autotest.sh@48 -- # udevadm_pid=1222608 00:02:43.696 23:02:40 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:02:43.696 23:02:40 -- spdk/autotest.sh@54 -- # echo 1222610 00:02:43.696 23:02:40 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:02:43.696 23:02:40 -- spdk/autotest.sh@56 -- # echo 1222611 00:02:43.696 23:02:40 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:02:43.696 23:02:40 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:02:43.696 23:02:40 -- spdk/autotest.sh@60 -- # echo 1222612 00:02:43.696 23:02:40 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l 00:02:43.696 23:02:40 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l 00:02:43.696 23:02:40 -- spdk/autotest.sh@62 -- # echo 1222613 00:02:43.696 23:02:40 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:43.696 23:02:40 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:43.696 23:02:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:43.696 23:02:40 -- common/autotest_common.sh@10 -- # set +x 00:02:43.696 23:02:40 -- spdk/autotest.sh@70 -- # create_test_list 00:02:43.696 23:02:40 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:43.696 23:02:40 -- common/autotest_common.sh@10 -- # set +x 00:02:43.696 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:43.696 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:43.696 23:02:40 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:02:43.696 23:02:40 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:43.696 23:02:40 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:43.696 23:02:40 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:02:43.696 23:02:40 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:02:43.696 23:02:40 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:43.696 23:02:40 -- common/autotest_common.sh@1450 -- # uname 00:02:43.697 23:02:40 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:02:43.697 23:02:40 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:43.697 23:02:40 -- common/autotest_common.sh@1470 -- # uname 00:02:43.697 23:02:40 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:02:43.697 23:02:40 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:02:43.697 23:02:40 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:02:43.956 lcov: LCOV version 1.15 00:02:43.956 23:02:40 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:02:45.864 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:45.864 geninfo: WARNING: GCOV did not produce any data for 
00:02:45.864 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno
00:02:58.174 23:02:54 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup
00:02:58.174 23:02:54 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:58.174 23:02:54 -- common/autotest_common.sh@10 -- # set +x
00:02:58.174 23:02:54 -- spdk/autotest.sh@89 -- # rm -f
00:02:58.174 23:02:54 -- spdk/autotest.sh@92 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:01.468 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:03:01.468 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:03:01.468 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:03:01.468 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:03:01.468 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:03:01.468 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:03:01.468 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:03:01.468 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:03:01.468 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:03:01.468 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:03:01.468 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:03:01.468 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:03:01.468 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:03:01.468 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:03:01.468 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:03:01.468 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:03:01.726 0000:d8:00.0 (8086 0a54): Already using the nvme driver
00:03:01.727 23:02:58 -- spdk/autotest.sh@94 -- # get_zoned_devs
00:03:01.727 23:02:58 -- common/autotest_common.sh@1664 -- # zoned_devs=()
00:03:01.727 23:02:58 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:03:01.727 23:02:58 -- common/autotest_common.sh@1665 -- # local nvme bdf
00:03:01.727 23:02:58 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:03:01.727 23:02:58 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:03:01.727 23:02:58 -- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:03:01.727 23:02:58 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:01.727 23:02:58 -- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:03:01.727 23:02:58 -- spdk/autotest.sh@96 -- # (( 0 > 0 ))
00:03:01.727 23:02:58 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1
00:03:01.727 23:02:58 -- spdk/autotest.sh@108 -- # grep -v p
00:03:01.727 23:02:58 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:03:01.727 23:02:58 -- spdk/autotest.sh@110 -- # [[ -z '' ]]
00:03:01.727 23:02:58 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1
00:03:01.727 23:02:58 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt
00:03:01.727 23:02:58 -- scripts/common.sh@389 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:01.727 No valid GPT data, bailing
00:03:01.727 23:02:58 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:01.727 23:02:58 -- scripts/common.sh@393 -- # pt=
00:03:01.727 23:02:58 -- scripts/common.sh@394 -- # return 1
00:03:01.727 23:02:58 -- spdk/autotest.sh@112 -- # dd
if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:01.727 1+0 records in 00:03:01.727 1+0 records out 00:03:01.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00619634 s, 169 MB/s 00:03:01.727 23:02:58 -- spdk/autotest.sh@116 -- # sync 00:03:01.727 23:02:58 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:01.727 23:02:58 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:01.727 23:02:58 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:08.297 23:03:04 -- spdk/autotest.sh@122 -- # uname -s 00:03:08.297 23:03:04 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:08.297 23:03:04 -- spdk/autotest.sh@123 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:08.297 23:03:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:08.297 23:03:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:08.297 23:03:04 -- common/autotest_common.sh@10 -- # set +x 00:03:08.297 ************************************ 00:03:08.297 START TEST setup.sh 00:03:08.297 ************************************ 00:03:08.297 23:03:04 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:03:08.297 * Looking for test storage... 00:03:08.297 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:08.297 23:03:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:08.297 23:03:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:08.297 23:03:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:08.297 23:03:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:08.297 23:03:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:08.297 23:03:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:08.297 23:03:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:08.297 23:03:04 -- scripts/common.sh@335 -- # IFS=.-: 00:03:08.297 23:03:04 -- scripts/common.sh@335 -- # read -ra ver1 00:03:08.297 23:03:04 -- scripts/common.sh@336 -- # IFS=.-: 00:03:08.297 23:03:04 -- scripts/common.sh@336 -- # read -ra ver2 00:03:08.297 23:03:04 -- scripts/common.sh@337 -- # local 'op=<' 00:03:08.297 23:03:04 -- scripts/common.sh@339 -- # ver1_l=2 00:03:08.297 23:03:04 -- scripts/common.sh@340 -- # ver2_l=1 00:03:08.297 23:03:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:08.297 23:03:04 -- scripts/common.sh@343 -- # case "$op" in 00:03:08.297 23:03:04 -- scripts/common.sh@344 -- # : 1 00:03:08.297 23:03:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:08.297 23:03:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:08.297 23:03:04 -- scripts/common.sh@364 -- # decimal 1 00:03:08.297 23:03:04 -- scripts/common.sh@352 -- # local d=1 00:03:08.297 23:03:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:08.297 23:03:04 -- scripts/common.sh@354 -- # echo 1 00:03:08.297 23:03:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:08.297 23:03:04 -- scripts/common.sh@365 -- # decimal 2 00:03:08.297 23:03:04 -- scripts/common.sh@352 -- # local d=2 00:03:08.297 23:03:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:08.297 23:03:04 -- scripts/common.sh@354 -- # echo 2 00:03:08.297 23:03:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:08.297 23:03:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:08.297 23:03:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:08.297 23:03:04 -- scripts/common.sh@367 -- # return 0 00:03:08.297 23:03:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:08.297 23:03:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:08.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.297 --rc genhtml_branch_coverage=1 00:03:08.297 --rc genhtml_function_coverage=1 00:03:08.297 --rc genhtml_legend=1 00:03:08.297 --rc geninfo_all_blocks=1 00:03:08.297 --rc geninfo_unexecuted_blocks=1 00:03:08.297 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:08.297 ' 00:03:08.297 23:03:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:08.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.297 --rc genhtml_branch_coverage=1 00:03:08.297 --rc genhtml_function_coverage=1 00:03:08.297 --rc genhtml_legend=1 00:03:08.297 --rc geninfo_all_blocks=1 00:03:08.297 --rc geninfo_unexecuted_blocks=1 00:03:08.297 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:08.297 ' 00:03:08.297 23:03:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:08.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.297 --rc genhtml_branch_coverage=1 00:03:08.297 --rc genhtml_function_coverage=1 00:03:08.297 --rc genhtml_legend=1 00:03:08.297 --rc geninfo_all_blocks=1 00:03:08.297 --rc geninfo_unexecuted_blocks=1 00:03:08.297 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:08.297 ' 00:03:08.297 23:03:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:08.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.297 --rc genhtml_branch_coverage=1 00:03:08.297 --rc genhtml_function_coverage=1 00:03:08.297 --rc genhtml_legend=1 00:03:08.297 --rc geninfo_all_blocks=1 00:03:08.297 --rc geninfo_unexecuted_blocks=1 00:03:08.297 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:08.297 ' 00:03:08.297 23:03:04 -- setup/test-setup.sh@10 -- # uname -s 00:03:08.297 23:03:04 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:08.297 23:03:04 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:08.297 23:03:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:08.297 23:03:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:08.297 23:03:04 -- common/autotest_common.sh@10 -- # set +x 00:03:08.297 ************************************ 00:03:08.297 START TEST acl 00:03:08.297 ************************************ 00:03:08.297 23:03:04 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:03:08.297 * Looking for test storage... 00:03:08.297 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:08.297 23:03:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:08.297 23:03:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:08.297 23:03:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:08.297 23:03:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:08.297 23:03:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:08.297 23:03:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:08.297 23:03:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:08.297 23:03:04 -- scripts/common.sh@335 -- # IFS=.-: 00:03:08.297 23:03:04 -- scripts/common.sh@335 -- # read -ra ver1 00:03:08.297 23:03:04 -- scripts/common.sh@336 -- # IFS=.-: 00:03:08.297 23:03:04 -- scripts/common.sh@336 -- # read -ra ver2 00:03:08.297 23:03:04 -- scripts/common.sh@337 -- # local 'op=<' 00:03:08.297 23:03:04 -- scripts/common.sh@339 -- # ver1_l=2 00:03:08.297 23:03:04 -- scripts/common.sh@340 -- # ver2_l=1 00:03:08.297 23:03:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:08.297 23:03:04 -- scripts/common.sh@343 -- # case "$op" in 00:03:08.297 23:03:04 -- scripts/common.sh@344 -- # : 1 00:03:08.297 23:03:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:08.297 23:03:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:08.297 23:03:04 -- scripts/common.sh@364 -- # decimal 1 00:03:08.297 23:03:04 -- scripts/common.sh@352 -- # local d=1 00:03:08.297 23:03:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:08.297 23:03:04 -- scripts/common.sh@354 -- # echo 1 00:03:08.297 23:03:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:08.297 23:03:04 -- scripts/common.sh@365 -- # decimal 2 00:03:08.297 23:03:04 -- scripts/common.sh@352 -- # local d=2 00:03:08.297 23:03:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:08.297 23:03:04 -- scripts/common.sh@354 -- # echo 2 00:03:08.297 23:03:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:08.297 23:03:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:08.297 23:03:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:08.297 23:03:04 -- scripts/common.sh@367 -- # return 0 00:03:08.297 23:03:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:08.297 23:03:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:08.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.297 --rc genhtml_branch_coverage=1 00:03:08.297 --rc genhtml_function_coverage=1 00:03:08.297 --rc genhtml_legend=1 00:03:08.297 --rc geninfo_all_blocks=1 00:03:08.297 --rc geninfo_unexecuted_blocks=1 00:03:08.297 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:08.297 ' 00:03:08.297 23:03:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:08.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.298 --rc genhtml_branch_coverage=1 00:03:08.298 --rc genhtml_function_coverage=1 00:03:08.298 --rc genhtml_legend=1 00:03:08.298 --rc geninfo_all_blocks=1 00:03:08.298 --rc geninfo_unexecuted_blocks=1 00:03:08.298 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:08.298 ' 00:03:08.298 23:03:04 -- common/autotest_common.sh@1704 
-- # export 'LCOV=lcov 00:03:08.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.298 --rc genhtml_branch_coverage=1 00:03:08.298 --rc genhtml_function_coverage=1 00:03:08.298 --rc genhtml_legend=1 00:03:08.298 --rc geninfo_all_blocks=1 00:03:08.298 --rc geninfo_unexecuted_blocks=1 00:03:08.298 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:08.298 ' 00:03:08.298 23:03:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:08.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:08.298 --rc genhtml_branch_coverage=1 00:03:08.298 --rc genhtml_function_coverage=1 00:03:08.298 --rc genhtml_legend=1 00:03:08.298 --rc geninfo_all_blocks=1 00:03:08.298 --rc geninfo_unexecuted_blocks=1 00:03:08.298 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:08.298 ' 00:03:08.298 23:03:04 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:08.298 23:03:04 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:08.298 23:03:04 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:08.298 23:03:04 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:08.298 23:03:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:08.298 23:03:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:08.298 23:03:04 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:08.298 23:03:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:08.298 23:03:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:08.298 23:03:04 -- setup/acl.sh@12 -- # devs=() 00:03:08.298 23:03:04 -- setup/acl.sh@12 -- # declare -a devs 00:03:08.298 23:03:04 -- setup/acl.sh@13 -- # drivers=() 00:03:08.298 23:03:04 -- setup/acl.sh@13 -- # declare -A drivers 00:03:08.298 23:03:04 -- setup/acl.sh@51 -- # setup reset 00:03:08.298 23:03:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:08.298 23:03:04 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:03:12.492 23:03:08 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:12.492 23:03:08 -- setup/acl.sh@16 -- # local dev driver 00:03:12.492 23:03:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.492 23:03:08 -- setup/acl.sh@15 -- # setup output status 00:03:12.492 23:03:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.492 23:03:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:15.786 Hugepages 00:03:15.786 node hugesize free / total 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # continue 00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # continue 00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # continue 00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.786 00:03:15.786 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # continue 00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.786 
23:03:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # continue 00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # continue 00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # continue 00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # continue 00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # continue 00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # continue 00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # continue 00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # continue 00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # continue 00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # continue 00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # continue 00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # continue 00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # continue 00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
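This scan (it continues below) is acl.sh reading `setup.sh status` line by line: read splits each row into fields, the *:*:*.* glob keeps only rows whose second field is a PCI address, and only nvme-bound controllers are recorded. A sketch of the same loop under those assumptions, with the status command path taken from this job's workspace:

    # Collect NVMe controllers from `setup.sh status` output.
    declare -a devs
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue           # skip header / hugepage rows
        [[ $driver == nvme ]] || continue           # ioatdma entries are ignored
        [[ $PCI_BLOCKED == *"$dev"* ]] && continue  # honor the blocklist (acl.sh@21)
        devs+=("$dev")
        drivers["$dev"]=$driver
    done < <(/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status)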
00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]]
00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # continue
00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]]
00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # continue
00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:15.786 23:03:11 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]]
00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:15.786 23:03:11 -- setup/acl.sh@20 -- # continue
00:03:15.786 23:03:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:15.786 23:03:12 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]]
00:03:15.786 23:03:12 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:15.786 23:03:12 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]]
00:03:15.786 23:03:12 -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:15.786 23:03:12 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:15.786 23:03:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:15.786 23:03:12 -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:03:15.786 23:03:12 -- setup/acl.sh@54 -- # run_test denied denied
00:03:15.786 23:03:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:15.786 23:03:12 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:15.786 23:03:12 -- common/autotest_common.sh@10 -- # set +x
00:03:15.786 ************************************
00:03:15.786 START TEST denied
00:03:15.786 ************************************
00:03:15.786 23:03:12 -- common/autotest_common.sh@1114 -- # denied
00:03:15.786 23:03:12 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0'
00:03:15.786 23:03:12 -- setup/acl.sh@38 -- # setup output config
00:03:15.786 23:03:12 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0'
00:03:15.786 23:03:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:15.786 23:03:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:19.987 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0
00:03:19.987 23:03:15 -- setup/acl.sh@40 -- # verify 0000:d8:00.0
00:03:19.987 23:03:15 -- setup/acl.sh@28 -- # local dev driver
00:03:19.987 23:03:15 -- setup/acl.sh@30 -- # for dev in "$@"
00:03:19.987 23:03:15 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]]
00:03:19.987 23:03:15 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver
00:03:19.987 23:03:15 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:19.987 23:03:15 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:19.987 23:03:15 -- setup/acl.sh@41 -- # setup reset
00:03:19.987 23:03:15 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:19.987 23:03:15 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:24.221
00:03:24.221 real 0m8.324s
00:03:24.221 user 0m2.736s
00:03:24.221 sys 0m4.963s
00:03:24.221 23:03:20 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:24.221 23:03:20 -- common/autotest_common.sh@10 -- # set +x
00:03:24.221 ************************************
00:03:24.221 END TEST denied
00:03:24.221 ************************************
00:03:24.221 23:03:20 -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:24.221 23:03:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
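The denied test above and the allowed test that follows exercise the same controller (0000:d8:00.0) through two environment knobs: PCI_BLOCKED makes setup.sh skip a device, while PCI_ALLOWED restricts binding to one device. Both assertions are just greps over setup.sh output, as in this sketch (paths, BDF, and grep patterns copied from the trace; treat it as an illustration, not the test's full body):

    rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    # denied: the blocked controller must be reported as skipped
    PCI_BLOCKED=' 0000:d8:00.0' "$rootdir/scripts/setup.sh" config \
        | grep 'Skipping denied controller at 0000:d8:00.0'
    # allowed: the allowed controller must be rebound (nvme -> vfio-pci)
    PCI_ALLOWED=0000:d8:00.0 "$rootdir/scripts/setup.sh" config \
        | grep -E '0000:d8:00.0 .*: nvme -> .*'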
00:03:24.221 23:03:20 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:24.221 23:03:20 -- common/autotest_common.sh@10 -- # set +x
00:03:24.221 ************************************
00:03:24.221 START TEST allowed
00:03:24.221 ************************************
00:03:24.221 23:03:20 -- common/autotest_common.sh@1114 -- # allowed
00:03:24.221 23:03:20 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0
00:03:24.221 23:03:20 -- setup/acl.sh@45 -- # setup output config
00:03:24.221 23:03:20 -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*'
00:03:24.221 23:03:20 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:24.221 23:03:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config
00:03:29.494 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:03:29.494 23:03:25 -- setup/acl.sh@47 -- # verify
00:03:29.494 23:03:25 -- setup/acl.sh@28 -- # local dev driver
00:03:29.494 23:03:25 -- setup/acl.sh@48 -- # setup reset
00:03:29.494 23:03:25 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:29.494 23:03:25 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset
00:03:32.785
00:03:32.785 real 0m8.227s
00:03:32.785 user 0m2.142s
00:03:32.785 sys 0m4.486s
00:03:32.785 23:03:28 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:32.785 23:03:28 -- common/autotest_common.sh@10 -- # set +x
00:03:32.785 ************************************
00:03:32.785 END TEST allowed
00:03:32.785 ************************************
00:03:32.785
00:03:32.785 real 0m24.045s
00:03:32.785 user 0m7.544s
00:03:32.785 sys 0m14.569s
00:03:32.785 23:03:28 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:32.785 23:03:28 -- common/autotest_common.sh@10 -- # set +x
00:03:32.785 ************************************
00:03:32.785 END TEST acl
00:03:32.785 ************************************
00:03:32.785 23:03:28 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh
00:03:32.785 23:03:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:32.785 23:03:28 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:32.785 23:03:28 -- common/autotest_common.sh@10 -- # set +x
00:03:32.785 ************************************
00:03:32.785 START TEST hugepages
00:03:32.785 ************************************
00:03:32.785 23:03:28 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh
00:03:32.785 * Looking for test storage...
00:03:32.785 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:32.785 23:03:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:32.785 23:03:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:32.785 23:03:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:32.785 23:03:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:32.785 23:03:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:32.786 23:03:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:32.786 23:03:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:32.786 23:03:28 -- scripts/common.sh@335 -- # IFS=.-: 00:03:32.786 23:03:28 -- scripts/common.sh@335 -- # read -ra ver1 00:03:32.786 23:03:28 -- scripts/common.sh@336 -- # IFS=.-: 00:03:32.786 23:03:28 -- scripts/common.sh@336 -- # read -ra ver2 00:03:32.786 23:03:28 -- scripts/common.sh@337 -- # local 'op=<' 00:03:32.786 23:03:28 -- scripts/common.sh@339 -- # ver1_l=2 00:03:32.786 23:03:28 -- scripts/common.sh@340 -- # ver2_l=1 00:03:32.786 23:03:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:32.786 23:03:28 -- scripts/common.sh@343 -- # case "$op" in 00:03:32.786 23:03:28 -- scripts/common.sh@344 -- # : 1 00:03:32.786 23:03:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:32.786 23:03:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:32.786 23:03:28 -- scripts/common.sh@364 -- # decimal 1 00:03:32.786 23:03:28 -- scripts/common.sh@352 -- # local d=1 00:03:32.786 23:03:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:32.786 23:03:28 -- scripts/common.sh@354 -- # echo 1 00:03:32.786 23:03:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:32.786 23:03:28 -- scripts/common.sh@365 -- # decimal 2 00:03:32.786 23:03:28 -- scripts/common.sh@352 -- # local d=2 00:03:32.786 23:03:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:32.786 23:03:28 -- scripts/common.sh@354 -- # echo 2 00:03:32.786 23:03:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:32.786 23:03:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:32.786 23:03:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:32.786 23:03:28 -- scripts/common.sh@367 -- # return 0 00:03:32.786 23:03:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:32.786 23:03:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:32.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.786 --rc genhtml_branch_coverage=1 00:03:32.786 --rc genhtml_function_coverage=1 00:03:32.786 --rc genhtml_legend=1 00:03:32.786 --rc geninfo_all_blocks=1 00:03:32.786 --rc geninfo_unexecuted_blocks=1 00:03:32.786 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:32.786 ' 00:03:32.786 23:03:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:32.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.786 --rc genhtml_branch_coverage=1 00:03:32.786 --rc genhtml_function_coverage=1 00:03:32.786 --rc genhtml_legend=1 00:03:32.786 --rc geninfo_all_blocks=1 00:03:32.786 --rc geninfo_unexecuted_blocks=1 00:03:32.786 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:32.786 ' 00:03:32.786 23:03:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:32.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.786 --rc genhtml_branch_coverage=1 
00:03:32.786 --rc genhtml_function_coverage=1 00:03:32.786 --rc genhtml_legend=1 00:03:32.786 --rc geninfo_all_blocks=1 00:03:32.786 --rc geninfo_unexecuted_blocks=1 00:03:32.786 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:32.786 ' 00:03:32.786 23:03:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:32.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.786 --rc genhtml_branch_coverage=1 00:03:32.786 --rc genhtml_function_coverage=1 00:03:32.786 --rc genhtml_legend=1 00:03:32.786 --rc geninfo_all_blocks=1 00:03:32.786 --rc geninfo_unexecuted_blocks=1 00:03:32.786 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:32.786 ' 00:03:32.786 23:03:28 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:32.786 23:03:28 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:32.786 23:03:28 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:32.786 23:03:28 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:32.786 23:03:28 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:32.786 23:03:28 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:32.786 23:03:28 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:32.786 23:03:28 -- setup/common.sh@18 -- # local node= 00:03:32.786 23:03:28 -- setup/common.sh@19 -- # local var val 00:03:32.786 23:03:28 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.786 23:03:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.786 23:03:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.786 23:03:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.786 23:03:28 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.786 23:03:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 40799488 kB' 'MemAvailable: 44519184 kB' 'Buffers: 8956 kB' 'Cached: 11196596 kB' 'SwapCached: 0 kB' 'Active: 7976528 kB' 'Inactive: 3688388 kB' 'Active(anon): 7559824 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462652 kB' 'Mapped: 165748 kB' 'Shmem: 7100460 kB' 'KReclaimable: 222944 kB' 'Slab: 911772 kB' 'SReclaimable: 222944 kB' 'SUnreclaim: 688828 kB' 'KernelStack: 21808 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433332 kB' 'Committed_AS: 8798224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214144 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB' 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 
00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.786 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.786 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 
-- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:28 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:28 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:29 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:29 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:29 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:29 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:29 -- setup/common.sh@32 -- # continue 00:03:32.787 23:03:29 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.787 23:03:29 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.787 23:03:29 -- 
setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:32.787 23:03:29 -- setup/common.sh@33 -- # echo 2048 00:03:32.787 23:03:29 -- setup/common.sh@33 -- # return 0 00:03:32.787 23:03:29 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:32.787 23:03:29 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:32.787 23:03:29 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:32.787 23:03:29 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:32.787 23:03:29 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:32.787 23:03:29 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:32.787 23:03:29 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:32.787 23:03:29 -- setup/hugepages.sh@207 -- # get_nodes 00:03:32.787 23:03:29 -- setup/hugepages.sh@27 -- # local node 00:03:32.787 23:03:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.787 23:03:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:32.787 23:03:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.787 23:03:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:32.787 23:03:29 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.787 23:03:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.787 23:03:29 -- setup/hugepages.sh@208 -- # clear_hp 00:03:32.787 23:03:29 -- setup/hugepages.sh@37 -- # local node hp 00:03:32.787 23:03:29 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.787 23:03:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.787 23:03:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.787 23:03:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.787 23:03:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.787 23:03:29 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:32.787 23:03:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.787 23:03:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.787 23:03:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:32.787 23:03:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:32.787 23:03:29 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:32.787 23:03:29 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:32.787 23:03:29 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:32.787 23:03:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:32.787 23:03:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:32.787 23:03:29 -- common/autotest_common.sh@10 -- # set +x 00:03:32.787 ************************************ 00:03:32.787 START TEST default_setup 00:03:32.787 ************************************ 00:03:32.787 23:03:29 -- common/autotest_common.sh@1114 -- # default_setup 00:03:32.787 23:03:29 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:32.787 23:03:29 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:32.787 23:03:29 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:32.787 23:03:29 -- setup/hugepages.sh@51 -- # shift 00:03:32.787 23:03:29 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:32.787 23:03:29 -- setup/hugepages.sh@52 -- # local node_ids 00:03:32.787 23:03:29 -- setup/hugepages.sh@55 -- # (( size >= 
default_hugepages )) 00:03:32.787 23:03:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:32.787 23:03:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:32.787 23:03:29 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:32.787 23:03:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.787 23:03:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.787 23:03:29 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.787 23:03:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.787 23:03:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.787 23:03:29 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:32.787 23:03:29 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:32.787 23:03:29 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:32.787 23:03:29 -- setup/hugepages.sh@73 -- # return 0 00:03:32.787 23:03:29 -- setup/hugepages.sh@137 -- # setup output 00:03:32.787 23:03:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.787 23:03:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:36.080 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:36.080 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:36.080 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:36.080 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:36.080 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:36.080 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:36.080 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:36.080 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:36.080 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:36.080 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:36.080 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:36.080 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:36.080 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:36.080 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:36.080 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:36.080 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:37.473 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:37.473 23:03:33 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:37.473 23:03:33 -- setup/hugepages.sh@89 -- # local node 00:03:37.473 23:03:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:37.473 23:03:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:37.473 23:03:33 -- setup/hugepages.sh@92 -- # local surp 00:03:37.474 23:03:33 -- setup/hugepages.sh@93 -- # local resv 00:03:37.474 23:03:33 -- setup/hugepages.sh@94 -- # local anon 00:03:37.474 23:03:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.474 23:03:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:37.474 23:03:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.474 23:03:33 -- setup/common.sh@18 -- # local node= 00:03:37.474 23:03:33 -- setup/common.sh@19 -- # local var val 00:03:37.474 23:03:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.474 23:03:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.474 23:03:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.474 23:03:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.474 23:03:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.474 23:03:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.474 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.474 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 
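What follows is get_meminfo walking /proc/meminfo field by field until it reaches the requested key: mapfile slurps the file, the "Node <n> " prefix is stripped so per-node meminfo files parse the same way, and an IFS=': ' read splits each "key: value" row. A compact sketch of that helper; the name and flow mirror the trace, while the per-node branch is inferred from the `local node=` line:

    shopt -s extglob   # for the +([0-9]) pattern used to strip node prefixes
    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _ line
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"    # e.g. Hugepagesize -> 2048 (kB), as in the trace
                return 0
            fi
        done
        return 1
    }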
00:03:37.474 23:03:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43013552 kB' 'MemAvailable: 46732940 kB' 'Buffers: 8956 kB' 'Cached: 11196724 kB' 'SwapCached: 0 kB' 'Active: 7976944 kB' 'Inactive: 3688388 kB' 'Active(anon): 7560240 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463084 kB' 'Mapped: 165864 kB' 'Shmem: 7100588 kB' 'KReclaimable: 222328 kB' 'Slab: 910936 kB' 'SReclaimable: 222328 kB' 'SUnreclaim: 688608 kB' 'KernelStack: 21888 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8799508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214208 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB' 00:03:37.474 23:03:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.474 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.474 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.474 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.474 23:03:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.474 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.474 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.474 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.474 23:03:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.474 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.474 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.474 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.474 23:03:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.474 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.474 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.474 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.474 23:03:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.474 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.474 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.474 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.474 23:03:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.474 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.474 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.474 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.474 23:03:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.474 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.474 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.474 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.474 23:03:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.474 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.474 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.474 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 
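The field-by-field walk resumes below; separately, the clear_hp and default_setup steps traced earlier drive hugepage counts through per-node sysfs knobs. A sketch of that control path (the 2048kB page size and the 1024-page target come from the trace; must run as root):

    # Release all hugepages on every node, then give node0 1024 x 2 MiB pages.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages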
[... trace elided: setup/common.sh@32 compares each remaining /proc/meminfo key (Active(anon) through WritebackTmp) against AnonHugePages and continues past every non-match ...]
00:03:37.475 23:03:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.475 23:03:33 -- setup/common.sh@32 -- #
continue 00:03:37.475 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.475 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.475 23:03:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.475 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.475 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.475 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.475 23:03:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.475 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.475 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.475 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.475 23:03:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.475 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.475 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.475 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.475 23:03:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.475 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.475 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.475 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.475 23:03:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.475 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.475 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.475 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.475 23:03:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.475 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.475 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.475 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.475 23:03:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.475 23:03:33 -- setup/common.sh@33 -- # echo 0 00:03:37.475 23:03:33 -- setup/common.sh@33 -- # return 0 00:03:37.475 23:03:33 -- setup/hugepages.sh@97 -- # anon=0 00:03:37.475 23:03:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:37.475 23:03:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.475 23:03:33 -- setup/common.sh@18 -- # local node= 00:03:37.475 23:03:33 -- setup/common.sh@19 -- # local var val 00:03:37.475 23:03:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.475 23:03:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.475 23:03:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.475 23:03:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.475 23:03:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.475 23:03:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.475 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.475 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.475 23:03:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43015988 kB' 'MemAvailable: 46735364 kB' 'Buffers: 8956 kB' 'Cached: 11196728 kB' 'SwapCached: 0 kB' 'Active: 7977832 kB' 'Inactive: 3688388 kB' 'Active(anon): 7561128 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464064 kB' 'Mapped: 165864 kB' 'Shmem: 7100592 kB' 'KReclaimable: 222296 kB' 'Slab: 910844 kB' 'SReclaimable: 222296 
kB' 'SUnreclaim: 688548 kB' 'KernelStack: 21904 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8808764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214224 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB'
[... trace elided: setup/common.sh@32 compares each /proc/meminfo key (MemTotal through Unaccepted) against HugePages_Surp and continues past every non-match ...]
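The two counters this pass and the next one extract, HugePages_Surp and HugePages_Rsvd, are the surplus pages allocated beyond the configured pool and the pages reserved for mappings but not yet faulted in; both are 0 here. The same counters are also exposed per page size under sysfs, which makes a handy cross-check (this is the standard kernel interface, not something this log exercises):

  # Mirror of the HugePages_* fields in /proc/meminfo, for 2 MiB pages.
  d=/sys/kernel/mm/hugepages/hugepages-2048kB
  for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
    printf '%s=%s\n' "$f" "$(cat "$d/$f")"
  done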
00:03:37.476 23:03:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.476 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.476 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.476 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.476 23:03:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.476 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.476 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.476 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.476 23:03:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.476 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.476 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.476 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.476 23:03:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.476 23:03:33 -- setup/common.sh@33 -- # echo 0 00:03:37.476 23:03:33 -- setup/common.sh@33 -- # return 0 00:03:37.476 23:03:33 -- setup/hugepages.sh@99 -- # surp=0 00:03:37.476 23:03:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:37.476 23:03:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:37.476 23:03:33 -- setup/common.sh@18 -- # local node= 00:03:37.476 23:03:33 -- setup/common.sh@19 -- # local var val 00:03:37.476 23:03:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.476 23:03:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.476 23:03:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.476 23:03:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.476 23:03:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.476 23:03:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.476 23:03:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43021076 kB' 'MemAvailable: 46740448 kB' 'Buffers: 8956 kB' 'Cached: 11196740 kB' 'SwapCached: 0 kB' 'Active: 7976652 kB' 'Inactive: 3688388 kB' 'Active(anon): 7559948 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463364 kB' 'Mapped: 165864 kB' 'Shmem: 7100604 kB' 'KReclaimable: 222296 kB' 'Slab: 910820 kB' 'SReclaimable: 222296 kB' 'SUnreclaim: 688524 kB' 'KernelStack: 21904 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8799004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214224 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB' 00:03:37.476 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.476 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.476 23:03:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.476 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.476 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.476 23:03:33 -- 
setup/common.sh@31 -- # read -r var val _
[... trace elided: setup/common.sh@32 compares each /proc/meminfo key (MemFree through HugePages_Free) against HugePages_Rsvd and continues past every non-match ...]
00:03:37.478 23:03:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.478 23:03:33 -- setup/common.sh@33 -- # echo 0 00:03:37.478 23:03:33 -- setup/common.sh@33 -- # return 0 00:03:37.478 23:03:33 -- setup/hugepages.sh@100 -- # resv=0 00:03:37.478 23:03:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 nr_hugepages=1024
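With anon, surp and resv gathered (all 0) and nr_hugepages echoed back as 1024, verify_nr_hugepages can assert the accounting identity it is built around: the HugePages_Total the kernel reports must equal the requested page count plus surplus and reserved pages. A compact sketch of that check (variable names follow the trace; the awk extraction is illustrative):

  nr_hugepages=1024 surp=0 resv=0
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  # 1024 == 1024 + 0 + 0 on this box; any mismatch fails the test
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2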
00:03:37.478 23:03:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:37.478 resv_hugepages=0 00:03:37.478 23:03:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:37.478 surplus_hugepages=0 00:03:37.478 23:03:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:37.478 anon_hugepages=0 00:03:37.478 23:03:33 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.478 23:03:33 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:37.478 23:03:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:37.478 23:03:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:37.478 23:03:33 -- setup/common.sh@18 -- # local node= 00:03:37.478 23:03:33 -- setup/common.sh@19 -- # local var val 00:03:37.478 23:03:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.478 23:03:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.478 23:03:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.478 23:03:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.478 23:03:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.478 23:03:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.478 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.478 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.478 23:03:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43018364 kB' 'MemAvailable: 46737736 kB' 'Buffers: 8956 kB' 'Cached: 11196760 kB' 'SwapCached: 0 kB' 'Active: 7977080 kB' 'Inactive: 3688388 kB' 'Active(anon): 7560376 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463156 kB' 'Mapped: 165864 kB' 'Shmem: 7100624 kB' 'KReclaimable: 222296 kB' 'Slab: 910796 kB' 'SReclaimable: 222296 kB' 'SUnreclaim: 688500 kB' 'KernelStack: 21936 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8799184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214256 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB' 00:03:37.478 23:03:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.478 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.478 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.478 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.478 23:03:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.478 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.478 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.478 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.478 23:03:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.478 23:03:33 -- setup/common.sh@32 -- # continue 00:03:37.478 23:03:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.478 23:03:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.478 23:03:33 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.478 23:03:33 -- setup/common.sh@32 -- # continue
[... trace elided: setup/common.sh@32 compares each /proc/meminfo key (Cached through Unaccepted) against HugePages_Total and continues past every non-match ...]
00:03:37.479 23:03:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.479 23:03:33 -- setup/common.sh@33 -- # echo 1024 00:03:37.479 23:03:33 -- setup/common.sh@33 -- # return 0 00:03:37.479 23:03:33 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.479 23:03:33 -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.479 23:03:33 -- setup/hugepages.sh@27 -- # local node 00:03:37.479 23:03:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.479 23:03:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:37.479 23:03:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.479 23:03:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:37.479 23:03:33 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:37.479 23:03:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.479 23:03:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.479 23:03:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.479 23:03:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.479 23:03:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.479 23:03:33 -- setup/common.sh@18 -- # local node=0 00:03:37.479 23:03:33 -- setup/common.sh@19 -- # local var val 00:03:37.479 23:03:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.479 23:03:33 --
00:03:37.479 23:03:33 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:37.479 23:03:33 -- setup/common.sh@18 -- # local node=0
00:03:37.479 23:03:33 -- setup/common.sh@19 -- # local var val
00:03:37.479 23:03:33 -- setup/common.sh@20 -- # local mem_f mem
00:03:37.479 23:03:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:37.479 23:03:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:37.479 23:03:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:37.479 23:03:33 -- setup/common.sh@28 -- # mapfile -t mem
00:03:37.479 23:03:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:37.479 23:03:33 -- setup/common.sh@31 -- # IFS=': '
00:03:37.479 23:03:33 -- setup/common.sh@31 -- # read -r var val _
00:03:37.480 23:03:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 25470272 kB' 'MemUsed: 7115096 kB' 'SwapCached: 0 kB' 'Active: 2829388 kB' 'Inactive: 189036 kB' 'Active(anon): 2639716 kB' 'Inactive(anon): 0 kB' 'Active(file): 189672 kB' 'Inactive(file): 189036 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2851732 kB' 'Mapped: 77200 kB' 'AnonPages: 169932 kB' 'Shmem: 2473024 kB' 'KernelStack: 10984 kB' 'PageTables: 3736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129548 kB' 'Slab: 481120 kB' 'SReclaimable: 129548 kB' 'SUnreclaim: 351572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: every field from MemTotal through HugePages_Free fails [[ $var == HugePages_Surp ]] and is skipped with "continue"]
00:03:37.480 23:03:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:37.480 23:03:34 -- setup/common.sh@33 -- # echo 0
00:03:37.480 23:03:34 -- setup/common.sh@33 -- # return 0
00:03:37.480 23:03:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:37.480 23:03:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:37.480 23:03:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:37.480 23:03:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:37.480 23:03:34 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:37.480 node0=1024 expecting 1024
00:03:37.480 23:03:34 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:37.480 
00:03:37.480 real 0m4.975s
00:03:37.480 user 0m1.227s
00:03:37.480 sys 0m2.114s
00:03:37.480 23:03:34 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:37.480 23:03:34 -- common/autotest_common.sh@10 -- # set +x
00:03:37.480 ************************************
00:03:37.480 END TEST default_setup
00:03:37.480 ************************************
00:03:37.481 23:03:34 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:37.481 23:03:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:37.481 23:03:34 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:37.481 23:03:34 -- common/autotest_common.sh@10 -- # set +x
00:03:37.481 ************************************
00:03:37.481 START TEST per_node_1G_alloc
00:03:37.481 ************************************
00:03:37.481 23:03:34 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:03:37.481 23:03:34 -- setup/hugepages.sh@143 -- # local IFS=,
00:03:37.481 23:03:34 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:37.481 23:03:34 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:37.481 23:03:34 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:37.481 23:03:34 -- setup/hugepages.sh@51 -- # shift
00:03:37.481 23:03:34 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:37.481 23:03:34 -- setup/hugepages.sh@52 -- # local node_ids
00:03:37.481 23:03:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:37.481 23:03:34 -- setup/hugepages.sh@57 -- # nr_hugepages=512
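The get_test_nr_hugepages arithmetic just traced converts a size request into a page count: with Hugepagesize at 2048 kB, the 1048576 kB (1 GiB) argument becomes 512 pages, and the per-node records that follow assign that count to each listed node. A simplified sketch of the derivation; the size-in-kB unit and the fixed 2048 kB page size are assumptions read off this run, not the helper's full logic:

    #!/usr/bin/env bash
    # Sketch of the size -> per-node page-count derivation seen in the trace.
    get_test_nr_hugepages() {
        local size=$1; shift               # requested size in kB (assumption)
        local node_ids=("$@")              # e.g. 0 1
        local default_hugepages=2048       # kB, Hugepagesize from /proc/meminfo
        local nr_hugepages=$((size / default_hugepages))
        declare -g -a nodes_test=()
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages # 512 pages per listed node in this run
        done
    }

    get_test_nr_hugepages 1048576 0 1
    echo "${nodes_test[0]} ${nodes_test[1]}"   # -> 512 512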
00:03:37.481 23:03:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:37.481 23:03:34 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:37.481 23:03:34 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:37.481 23:03:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:37.481 23:03:34 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:37.481 23:03:34 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:37.481 23:03:34 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:37.481 23:03:34 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:37.481 23:03:34 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:37.481 23:03:34 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:37.481 23:03:34 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:37.481 23:03:34 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:37.481 23:03:34 -- setup/hugepages.sh@73 -- # return 0
00:03:37.481 23:03:34 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:37.481 23:03:34 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:37.481 23:03:34 -- setup/hugepages.sh@146 -- # setup output
00:03:37.481 23:03:34 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:37.481 23:03:34 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:40.865 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:40.865 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:40.865 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:40.865 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:40.865 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:40.865 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:40.865 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:40.865 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:40.865 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:40.865 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:40.865 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:40.865 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:40.865 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:40.865 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:40.865 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:40.865 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:40.865 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
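With NRHUGE=512 and HUGENODE=0,1 exported, scripts/setup.sh reconfigures the hugepage pool before re-checking the driver bindings above. A hand-rolled sketch of the standard kernel interface such per-node allocation relies on, not the actual setup.sh (root required; the hugepages-2048kB directory name assumes the 2048 kB default page size of this run):

    #!/usr/bin/env bash
    # Sketch: write per-node hugepage counts to the kernel's sysfs knobs.
    NRHUGE=512
    HUGENODE=0,1
    IFS=',' read -ra nodes <<<"$HUGENODE"
    for node in "${nodes[@]}"; do
        echo "$NRHUGE" | sudo tee \
            "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    done
    grep -E 'HugePages_(Total|Free)' /proc/meminfo   # expect 1024 total afterwards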
00:03:40.865 23:03:37 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:40.865 23:03:37 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:40.865 23:03:37 -- setup/hugepages.sh@89 -- # local node
00:03:40.865 23:03:37 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:40.865 23:03:37 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:40.865 23:03:37 -- setup/hugepages.sh@92 -- # local surp
00:03:40.865 23:03:37 -- setup/hugepages.sh@93 -- # local resv
00:03:40.865 23:03:37 -- setup/hugepages.sh@94 -- # local anon
00:03:40.865 23:03:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:40.865 23:03:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:40.865 23:03:37 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:40.865 23:03:37 -- setup/common.sh@18 -- # local node=
00:03:40.865 23:03:37 -- setup/common.sh@19 -- # local var val
00:03:40.865 23:03:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.865 23:03:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.865 23:03:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.865 23:03:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.865 23:03:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.865 23:03:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.865 23:03:37 -- setup/common.sh@31 -- # IFS=': '
00:03:40.865 23:03:37 -- setup/common.sh@31 -- # read -r var val _
00:03:40.865 23:03:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43013952 kB' 'MemAvailable: 46733320 kB' 'Buffers: 8956 kB' 'Cached: 11205036 kB' 'SwapCached: 0 kB' 'Active: 7990732 kB' 'Inactive: 3688388 kB' 'Active(anon): 7574028 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 468432 kB' 'Mapped: 166416 kB' 'Shmem: 7108900 kB' 'KReclaimable: 222288 kB' 'Slab: 910960 kB' 'SReclaimable: 222288 kB' 'SUnreclaim: 688672 kB' 'KernelStack: 21808 kB' 'PageTables: 7580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8813024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214484 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB'
[xtrace elided: every field from MemTotal through HardwareCorrupted fails [[ $var == AnonHugePages ]] and is skipped with "continue"]
00:03:40.866 23:03:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:40.866 23:03:37 -- setup/common.sh@33 -- # echo 0
00:03:40.866 23:03:37 -- setup/common.sh@33 -- # return 0
00:03:40.866 23:03:37 -- setup/hugepages.sh@97 -- # anon=0
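The "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" record above is a transparent-hugepage check: the bracketed word in /sys/kernel/mm/transparent_hugepage/enabled marks the active THP mode, and AnonHugePages only needs sampling when THP is not set to "[never]". A sketch of that check, reusing the hypothetical get_meminfo from the earlier sketch:

    #!/usr/bin/env bash
    # Sketch of the THP gate behind the @96 record; assumes a THP-enabled kernel.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # get_meminfo: see the earlier sketch
    fi
    echo "THP mode line: $thp; AnonHugePages: $anon kB"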
00:03:40.866 23:03:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:40.866 23:03:37 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:40.866 23:03:37 -- setup/common.sh@18 -- # local node=
00:03:40.866 23:03:37 -- setup/common.sh@19 -- # local var val
00:03:40.866 23:03:37 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.866 23:03:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.866 23:03:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:40.866 23:03:37 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:40.866 23:03:37 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.866 23:03:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.866 23:03:37 -- setup/common.sh@31 -- # IFS=': '
00:03:40.866 23:03:37 -- setup/common.sh@31 -- # read -r var val _
00:03:40.866 23:03:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43015620 kB' 'MemAvailable: 46734988 kB' 'Buffers: 8956 kB' 'Cached: 11205040 kB' 'SwapCached: 0 kB' 'Active: 7987072 kB' 'Inactive: 3688388 kB' 'Active(anon): 7570368 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464356 kB' 'Mapped: 166376 kB' 'Shmem: 7108904 kB' 'KReclaimable: 222288 kB' 'Slab: 910928 kB' 'SReclaimable: 222288 kB' 'SUnreclaim: 688640 kB' 'KernelStack: 22016 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8809524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214464 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB'
[xtrace elided: every field from MemTotal through HugePages_Free fails [[ $var == HugePages_Surp ]] and is skipped with "continue"]
00:03:40.868 23:03:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:40.868 23:03:37 -- setup/common.sh@33 -- # echo 0
00:03:40.868 23:03:37 -- setup/common.sh@33 -- # return 0
00:03:40.868 23:03:37 -- setup/hugepages.sh@99 -- # surp=0
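The surplus value just read, together with the reserved count read next, feeds the same pool-consistency arithmetic recorded earlier at setup/hugepages.sh@110: HugePages_Total must equal the requested page count plus surplus plus reserved pages. A sketch of that check under that reading of the trace, again using the hypothetical get_meminfo from the earlier sketch:

    #!/usr/bin/env bash
    # Sketch of the pool accounting the three get_meminfo calls feed.
    nr_hugepages=1024                      # what the test asked setup.sh for
    total=$(get_meminfo HugePages_Total)
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: $total == $nr_hugepages + $surp + $resv"
    else
        echo "unexpected pool state" >&2
    fi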
-- # continue 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # continue 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # continue 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # continue 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # continue 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # continue 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # continue 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # continue 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # continue 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # continue 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # continue 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # continue 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # continue 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.868 23:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.868 23:03:37 -- setup/common.sh@32 
-- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.868 23:03:37 -- setup/common.sh@32 -- # continue
[xtrace elided: setup/common.sh@31-32 repeat IFS=': ' / read -r var val _ / continue for every /proc/meminfo key from SwapTotal through HugePages_Free, none of which matches HugePages_Rsvd]
00:03:40.869 23:03:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.869 23:03:37 -- setup/common.sh@33 -- # echo 0 00:03:40.869 23:03:37 -- setup/common.sh@33 -- # return 0 00:03:40.869 23:03:37 -- setup/hugepages.sh@100 -- # resv=0
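For reference, here is a minimal sketch of the helper this trace is exercising (setup/common.sh's get_meminfo), reconstructed from the xtrace lines above; it approximates the traced behaviour and is not the verbatim SPDK source:

    shopt -s extglob                          # the trace relies on the extended +([0-9]) glob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val mem_f mem
        mem_f=/proc/meminfo
        # Per-node queries read the sysfs copy instead, when it exists.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes each line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan key/value pairs until the requested key matches, then print its value.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0           # e.g. "0" for HugePages_Rsvd, "1024" for HugePages_Total
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

In this run, resv=$(get_meminfo HugePages_Rsvd) lands on 0, which is what the "echo 0" / "resv=0" lines above record.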
setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:40.869 nr_hugepages=1024 00:03:40.869 23:03:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:40.869 resv_hugepages=0 00:03:40.869 23:03:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:40.869 surplus_hugepages=0 00:03:40.869 23:03:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.869 anon_hugepages=0 00:03:40.869 23:03:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.869 23:03:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:40.869 23:03:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:40.869 23:03:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.869 23:03:37 -- setup/common.sh@18 -- # local node= 00:03:40.869 23:03:37 -- setup/common.sh@19 -- # local var val 00:03:40.869 23:03:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.869 23:03:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.869 23:03:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.869 23:03:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.869 23:03:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.869 23:03:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.869 23:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.869 23:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.870 23:03:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43026056 kB' 'MemAvailable: 46745424 kB' 'Buffers: 8956 kB' 'Cached: 11205064 kB' 'SwapCached: 0 kB' 'Active: 7987200 kB' 'Inactive: 3688388 kB' 'Active(anon): 7570496 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464980 kB' 'Mapped: 165884 kB' 'Shmem: 7108928 kB' 'KReclaimable: 222288 kB' 'Slab: 910888 kB' 'SReclaimable: 222288 kB' 'SUnreclaim: 688600 kB' 'KernelStack: 22352 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8820728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214480 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB' 00:03:40.870 23:03:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.870 23:03:37 -- setup/common.sh@32 -- # continue 00:03:40.870 23:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.870 23:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.870 23:03:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.870 23:03:37 -- setup/common.sh@32 -- # continue 00:03:40.870 23:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.870 23:03:37 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.870 23:03:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.870 23:03:37 -- setup/common.sh@32 -- # continue 00:03:40.870 23:03:37 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.870 23:03:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.870 23:03:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.870 23:03:37 -- setup/common.sh@32 -- # continue
[xtrace elided: the same read/continue loop walks every /proc/meminfo key from Cached through Unaccepted, none of which matches HugePages_Total]
00:03:40.871 23:03:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.871 23:03:37 -- setup/common.sh@33 -- # echo 1024 00:03:40.871 23:03:37 -- setup/common.sh@33 -- # return 0 00:03:40.871 23:03:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.871 23:03:37 -- setup/hugepages.sh@112 -- # get_nodes 00:03:40.871 23:03:37 -- setup/hugepages.sh@27 -- # local node 00:03:40.871 23:03:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.871 23:03:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:40.871 23:03:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.871 23:03:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:40.871 23:03:37 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:40.871 23:03:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.871 23:03:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.871 23:03:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.871 23:03:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
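Stripped of the trace noise, the assertion just made is plain arithmetic: the kernel's system-wide HugePages_Total must equal the pages the test requested plus any surplus and reserved pages. A compressed restatement, using the get_meminfo sketch above (variable names follow the trace; this is a sketch, not the verbatim hugepages.sh):

    nr_hugepages=1024                       # requested by the test
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    total=$(get_meminfo HugePages_Total)    # 1024 in this run
    (( total == nr_hugepages + surp + resv ))   # holds: 1024 == 1024 + 0 + 0

The per-node pass that follows repeats the same lookup against /sys/devices/system/node/node*/meminfo, adding each node's HugePages_Surp into the expected per-node counts.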
00:03:40.871 23:03:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.871 23:03:37 -- setup/common.sh@18 -- # local node=0 00:03:40.871 23:03:37 -- setup/common.sh@19 -- # local var val 00:03:40.871 23:03:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.871 23:03:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.871 23:03:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:40.871 23:03:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:40.871 23:03:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.871 23:03:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.871 23:03:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26518452 kB' 'MemUsed: 6066916 kB' 'SwapCached: 0 kB' 'Active: 2837192 kB' 'Inactive: 189036 kB' 'Active(anon): 2647520 kB' 'Inactive(anon): 0 kB' 'Active(file): 189672 kB' 'Inactive(file): 189036 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2859932 kB' 'Mapped: 77208 kB' 'AnonPages: 169488 kB' 'Shmem: 2481224 kB' 'KernelStack: 11032 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129540 kB' 'Slab: 481156 kB' 'SReclaimable: 129540 kB' 'SUnreclaim: 351616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: the read/continue loop walks the node0 keys from MemTotal through HugePages_Free, none matching HugePages_Surp]
00:03:40.872 23:03:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.872 23:03:37 -- setup/common.sh@33 -- # echo 0 00:03:40.872 23:03:37 -- setup/common.sh@33 -- # return 0 00:03:40.872 23:03:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.872 23:03:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.872 23:03:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.872 23:03:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:40.872 23:03:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.872 23:03:37 -- setup/common.sh@18 -- # local node=1 00:03:40.872 23:03:37 -- setup/common.sh@19 -- # local var val 00:03:40.872 23:03:37 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.872 23:03:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.872 23:03:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:40.872 23:03:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:40.872 23:03:37 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.872 23:03:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.872 23:03:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698396 kB' 'MemFree: 16511624 kB' 'MemUsed: 11186772 kB' 'SwapCached: 0 kB' 'Active: 5148428 kB' 'Inactive: 3499352 kB' 'Active(anon): 4921396 kB' 'Inactive(anon): 0 kB' 'Active(file): 227032 kB' 'Inactive(file): 3499352 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8354120 kB' 'Mapped: 88676 kB' 'AnonPages: 293812 kB' 'Shmem: 4627736 kB' 'KernelStack: 10904 kB' 'PageTables: 3788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92748 kB' 'Slab: 429692 kB' 'SReclaimable: 92748 kB' 'SUnreclaim: 336944 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: the read/continue loop walks the node1 keys from MemTotal through HugePages_Free, none matching HugePages_Surp]
00:03:40.873 23:03:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.873 23:03:37 -- setup/common.sh@33 -- # echo 0 00:03:40.873 23:03:37 -- setup/common.sh@33 -- # return 0 00:03:40.873 23:03:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.873 23:03:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.873 23:03:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.873 23:03:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.873 23:03:37 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:40.873 node0=512 expecting 512 00:03:40.873 23:03:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.873 23:03:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.873 23:03:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
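What this tail of the test checks: 1024 hugepages spread across 2 NUMA nodes should show up as 512 on each node. The sorted_t and sorted_s arrays are filled as value-sets of expected versus kernel-reported per-node counts, so the final comparison ignores node order. A compressed sketch of that loop, under the assumption (taken from the trace) that nodes_test holds the expected counts and nodes_sys the values read from sysfs; since both are 512 in this run, the order of the two values in the echo cannot be inferred from the log:

    nodes_test=([0]=512 [1]=512)   # expected: even split of 1024 pages
    nodes_sys=([0]=512 [1]=512)    # observed: per-node HugePages_Total from sysfs
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # mark expected per-node count, e.g. sorted_t[512]=1
        sorted_s[nodes_sys[node]]=1    # mark observed per-node count
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done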
setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:40.873 node1=512 expecting 512 00:03:40.873 23:03:37 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:40.873 00:03:40.873 real 0m3.360s 00:03:40.873 user 0m1.344s 00:03:40.873 sys 0m2.080s 00:03:40.873 23:03:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:40.873 23:03:37 -- common/autotest_common.sh@10 -- # set +x 00:03:40.873 ************************************ 00:03:40.873 END TEST per_node_1G_alloc 00:03:40.873 ************************************ 00:03:40.873 23:03:37 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:40.873 23:03:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:40.873 23:03:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:40.873 23:03:37 -- common/autotest_common.sh@10 -- # set +x 00:03:40.873 ************************************ 00:03:40.873 START TEST even_2G_alloc 00:03:40.873 ************************************ 00:03:40.873 23:03:37 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:03:40.873 23:03:37 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:40.873 23:03:37 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:40.873 23:03:37 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:40.873 23:03:37 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:40.873 23:03:37 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:40.873 23:03:37 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:40.873 23:03:37 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:40.873 23:03:37 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:40.873 23:03:37 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:40.873 23:03:37 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:40.873 23:03:37 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:40.873 23:03:37 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:40.873 23:03:37 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:40.873 23:03:37 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:40.873 23:03:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.873 23:03:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:40.873 23:03:37 -- setup/hugepages.sh@83 -- # : 512 00:03:40.873 23:03:37 -- setup/hugepages.sh@84 -- # : 1 00:03:40.873 23:03:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.873 23:03:37 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:40.873 23:03:37 -- setup/hugepages.sh@83 -- # : 0 00:03:40.873 23:03:37 -- setup/hugepages.sh@84 -- # : 0 00:03:40.873 23:03:37 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.873 23:03:37 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:40.873 23:03:37 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:40.873 23:03:37 -- setup/hugepages.sh@153 -- # setup output 00:03:40.873 23:03:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.873 23:03:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:44.168 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:44.168 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:44.168 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:44.168 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:44.168 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:44.168 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:44.168 0000:00:04.1 (8086 2021): Already 
using the vfio-pci driver 00:03:44.168 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:44.168 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:44.168 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:44.168 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:44.168 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:44.168 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:44.168 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:44.168 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:44.168 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:44.168 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:44.433 23:03:40 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:44.433 23:03:40 -- setup/hugepages.sh@89 -- # local node 00:03:44.433 23:03:40 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.433 23:03:40 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.433 23:03:40 -- setup/hugepages.sh@92 -- # local surp 00:03:44.433 23:03:40 -- setup/hugepages.sh@93 -- # local resv 00:03:44.433 23:03:40 -- setup/hugepages.sh@94 -- # local anon 00:03:44.433 23:03:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.433 23:03:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.433 23:03:40 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.433 23:03:40 -- setup/common.sh@18 -- # local node= 00:03:44.433 23:03:40 -- setup/common.sh@19 -- # local var val 00:03:44.433 23:03:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.433 23:03:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.433 23:03:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.433 23:03:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.433 23:03:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.433 23:03:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.433 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.433 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.433 23:03:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43046956 kB' 'MemAvailable: 46766320 kB' 'Buffers: 8956 kB' 'Cached: 11205172 kB' 'SwapCached: 0 kB' 'Active: 7984800 kB' 'Inactive: 3688388 kB' 'Active(anon): 7568096 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461708 kB' 'Mapped: 165892 kB' 'Shmem: 7109036 kB' 'KReclaimable: 222280 kB' 'Slab: 910800 kB' 'SReclaimable: 222280 kB' 'SUnreclaim: 688520 kB' 'KernelStack: 21808 kB' 'PageTables: 7448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8830488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214416 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB' 00:03:44.433 23:03:40 -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.433 23:03:40 -- setup/common.sh@32 -- # continue
[xtrace elided: the read/continue loop walks the /proc/meminfo keys from MemFree through Percpu, none matching AnonHugePages; the captured trace breaks off mid-scan here]
00:03:44.434 23:03:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.434 23:03:40 -- setup/common.sh@32 -- # continue
00:03:44.434 23:03:40 -- setup/common.sh@31 -- # IFS=': '
00:03:44.434 23:03:40 -- setup/common.sh@31 -- # read -r var val _
00:03:44.434 23:03:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:44.434 23:03:40 -- setup/common.sh@33 -- # echo 0
00:03:44.434 23:03:40 -- setup/common.sh@33 -- # return 0
00:03:44.434 23:03:40 -- setup/hugepages.sh@97 -- # anon=0
00:03:44.434 23:03:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:44.434 23:03:40 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.434 23:03:40 -- setup/common.sh@18 -- # local node=
00:03:44.434 23:03:40 -- setup/common.sh@19 -- # local var val
00:03:44.434 23:03:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:44.434 23:03:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.434 23:03:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.434 23:03:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.434 23:03:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.434 23:03:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.434 23:03:40 -- setup/common.sh@31 -- # IFS=': '
00:03:44.434 23:03:40 -- setup/common.sh@31 -- # read -r var val _
00:03:44.434 23:03:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43047228 kB' 'MemAvailable: 46766592 kB' 'Buffers: 8956 kB' 'Cached: 11205180 kB' 'SwapCached: 0 kB' 'Active: 7984056 kB' 'Inactive: 3688388 kB' 'Active(anon): 7567352 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461468 kB' 'Mapped: 165772 kB' 'Shmem: 7109044 kB' 'KReclaimable: 222280 kB' 'Slab: 910796 kB' 'SReclaimable: 222280 kB' 'SUnreclaim: 688516 kB' 'KernelStack: 21824 kB' 'PageTables: 7440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8830504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214400 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB'
[... key-by-key scan against \H\u\g\e\P\a\g\e\s\_\S\u\r\p repeats for every key from MemTotal through HugePages_Rsvd; none matches ...]
00:03:44.435 23:03:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.435 23:03:40 -- setup/common.sh@33 -- # echo 0
00:03:44.435 23:03:40 -- setup/common.sh@33 -- # return 0
00:03:44.435 23:03:40 -- setup/hugepages.sh@99 -- # surp=0
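The scans traced above all follow the same pattern from setup/common.sh's get_meminfo: snapshot the meminfo source into an array, then walk it line by line with IFS=': ' until the requested key matches, echoing that key's value. The following is a minimal standalone sketch of that technique, not the verbatim SPDK helper; the function name get_meminfo_sketch, the single-digit node handling, and the fall-back echo of 0 for an absent key are simplifying assumptions (the traced runs always find their key).

    # get_meminfo_sketch KEY [NODE] -- sketch of the scan traced above (assumed
    # semantics, not the verbatim setup/common.sh source).
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # Per-node counters live under sysfs when a node index is supplied.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#Node [0-9] }               # sysfs lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line" # split "Key:   value [kB]"
            if [[ $var == "$get" ]]; then
                echo "$val"                        # e.g. 1024 for HugePages_Total
                return 0
            fi
        done < "$mem_f"
        echo 0                                     # assumption: report 0 if key absent
    }

Against the snapshots in this log, get_meminfo_sketch HugePages_Total would print 1024 and get_meminfo_sketch HugePages_Surp would print 0, matching the values the trace returns.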
00:03:44.435 23:03:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:44.435 23:03:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:44.435 23:03:40 -- setup/common.sh@18 -- # local node=
00:03:44.435 23:03:40 -- setup/common.sh@19 -- # local var val
00:03:44.435 23:03:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:44.435 23:03:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.435 23:03:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.435 23:03:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.435 23:03:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.436 23:03:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.436 23:03:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43047260 kB' 'MemAvailable: 46766616 kB' 'Buffers: 8956 kB' 'Cached: 11205196 kB' 'SwapCached: 0 kB' 'Active: 7984560 kB' 'Inactive: 3688388 kB' 'Active(anon): 7567856 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462004 kB' 'Mapped: 165772 kB' 'Shmem: 7109060 kB' 'KReclaimable: 222264 kB' 'Slab: 910780 kB' 'SReclaimable: 222264 kB' 'SUnreclaim: 688516 kB' 'KernelStack: 21872 kB' 'PageTables: 7620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8830892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214416 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB'
00:03:44.436 23:03:40 -- setup/common.sh@31 -- # IFS=': '
00:03:44.436 23:03:40 -- setup/common.sh@31 -- # read -r var val _
[... key-by-key scan against \H\u\g\e\P\a\g\e\s\_\R\s\v\d repeats for every key from MemTotal through HugePages_Free; none matches ...]
00:03:44.437 23:03:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:44.437 23:03:40 -- setup/common.sh@33 -- # echo 0
00:03:44.437 23:03:40 -- setup/common.sh@33 -- # return 0
00:03:44.437 23:03:40 -- setup/hugepages.sh@100 -- # resv=0
00:03:44.437 23:03:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:44.437 23:03:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:44.437 23:03:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:44.437 23:03:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:44.437 23:03:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:44.437 23:03:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
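At this point the script has gathered anon=0, surp=0, and resv=0 and reported the pool, and the arithmetic checks assert the kernel's hugepage accounting is consistent: with nr_hugepages=1024 requested and no surplus or reserved pages, HugePages_Total must equal 1024 + 0 + 0 = 1024, which the HugePages_Total query that follows confirms. A sketch of that check, reusing the hypothetical get_meminfo_sketch helper above (assumed semantics, not the verbatim hugepages.sh assertions):

    # Consistency check mirroring hugepages.sh@107-@110 in this trace.
    nr_hugepages=1024
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)  # 1024 in this run
    # 1024 == 1024 + 0 + 0: the allocated pool matches the request exactly.
    (( total == nr_hugepages + surp + resv )) || echo "hugepage pool mismatch: $total"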
00:03:44.437 23:03:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:44.437 23:03:40 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:44.437 23:03:40 -- setup/common.sh@18 -- # local node=
00:03:44.437 23:03:40 -- setup/common.sh@19 -- # local var val
00:03:44.437 23:03:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:44.437 23:03:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.437 23:03:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:44.437 23:03:40 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:44.437 23:03:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.437 23:03:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.437 23:03:40 -- setup/common.sh@31 -- # IFS=': '
00:03:44.437 23:03:40 -- setup/common.sh@31 -- # read -r var val _
00:03:44.437 23:03:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43047260 kB' 'MemAvailable: 46766616 kB' 'Buffers: 8956 kB' 'Cached: 11205220 kB' 'SwapCached: 0 kB' 'Active: 7984184 kB' 'Inactive: 3688388 kB' 'Active(anon): 7567480 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 461600 kB' 'Mapped: 165772 kB' 'Shmem: 7109084 kB' 'KReclaimable: 222264 kB' 'Slab: 910780 kB' 'SReclaimable: 222264 kB' 'SUnreclaim: 688516 kB' 'KernelStack: 21856 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8830908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214416 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB'
[... key-by-key scan against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l repeats for every key from MemTotal through Unaccepted; none matches ...]
00:03:44.438 23:03:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:44.438 23:03:40 -- setup/common.sh@33 -- # echo 1024
00:03:44.438 23:03:40 -- setup/common.sh@33 -- # return 0
00:03:44.439 23:03:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:44.439 23:03:40 -- setup/hugepages.sh@112 -- # get_nodes
00:03:44.439 23:03:40 -- setup/hugepages.sh@27 -- # local node
00:03:44.439 23:03:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:44.439 23:03:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:44.439 23:03:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:44.439 23:03:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:44.439 23:03:40 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:44.439 23:03:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:44.439 23:03:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:44.439 23:03:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:44.439 23:03:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:44.439 23:03:40 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.439 23:03:40 -- setup/common.sh@18 -- # local node=0
00:03:44.439 23:03:40 -- setup/common.sh@19 -- # local var val
00:03:44.439 23:03:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:44.439 23:03:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.439 23:03:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:44.439 23:03:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:44.439 23:03:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.439 23:03:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': '
00:03:44.439 23:03:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26530220 kB' 'MemUsed: 6055148 kB' 'SwapCached: 0 kB' 'Active: 2835048 kB' 'Inactive: 189036 kB' 'Active(anon): 2645376 kB' 'Inactive(anon): 0 kB' 'Active(file): 189672 kB' 'Inactive(file): 189036 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2859988 kB' 'Mapped: 77096 kB' 'AnonPages: 167276 kB' 'Shmem: 2481280 kB' 'KernelStack: 10888 kB' 'PageTables: 3648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129516 kB' 'Slab: 480872 kB' 'SReclaimable: 129516 kB' 'SUnreclaim: 351356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- 
setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.439 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.439 23:03:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.440 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.440 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.440 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.440 23:03:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.440 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.440 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.440 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.440 23:03:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.440 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.440 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.440 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.440 23:03:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.440 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.440 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.440 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.440 23:03:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.440 23:03:40 -- setup/common.sh@32 -- # continue 00:03:44.440 23:03:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.440 23:03:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.440 23:03:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.440 23:03:40 -- setup/common.sh@33 -- # echo 0 00:03:44.440 23:03:40 -- setup/common.sh@33 -- # return 0 00:03:44.440 23:03:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.440 23:03:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.440 23:03:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.440 23:03:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 
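The lookup traced above re-reads a meminfo file and string-compares every field until the requested key turns up, falling back from /proc/meminfo to the per-node sysfs file when a node is given. A minimal stand-alone sketch of that idea, assuming the Linux layout shown in the trace (the helper name meminfo_value is hypothetical; this is not SPDK's setup/common.sh):

meminfo_value() {
    # Print the value of KEY from /proc/meminfo, or from
    # /sys/devices/system/node/node$NODE/meminfo when NODE is given.
    local key=$1 node=$2 mem_f=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix each line with "Node <N> "; strip that, then
    # split "Field:   12345 kB" on ':' and spaces, one field per line.
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* *//' "$mem_f")
    return 1
}
# On the node0 snapshot above: meminfo_value HugePages_Surp 0  -> 0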
00:03:44.440 23:03:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:44.440 23:03:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:44.440 23:03:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:44.440 23:03:40 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:44.440 23:03:40 -- setup/common.sh@18 -- # local node=1
00:03:44.440 23:03:40 -- setup/common.sh@19 -- # local var val
00:03:44.440 23:03:40 -- setup/common.sh@20 -- # local mem_f mem
00:03:44.440 23:03:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:44.440 23:03:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:44.440 23:03:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:44.440 23:03:40 -- setup/common.sh@28 -- # mapfile -t mem
00:03:44.440 23:03:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:44.440 23:03:40 -- setup/common.sh@31 -- # IFS=': '
00:03:44.440 23:03:40 -- setup/common.sh@31 -- # read -r var val _
00:03:44.440 23:03:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698396 kB' 'MemFree: 16516624 kB' 'MemUsed: 11181772 kB' 'SwapCached: 0 kB' 'Active: 5149504 kB' 'Inactive: 3499352 kB' 'Active(anon): 4922472 kB' 'Inactive(anon): 0 kB' 'Active(file): 227032 kB' 'Inactive(file): 3499352 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8354192 kB' 'Mapped: 88676 kB' 'AnonPages: 294732 kB' 'Shmem: 4627808 kB' 'KernelStack: 10984 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92748 kB' 'Slab: 429908 kB' 'SReclaimable: 92748 kB' 'SUnreclaim: 337160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace loop elided: one '[[ field == HugePages_Surp ]] / continue' pair per node1 field listed above, until HugePages_Surp matches]
00:03:44.441 23:03:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:44.441 23:03:40 -- setup/common.sh@33 -- # echo 0
00:03:44.441 23:03:40 -- setup/common.sh@33 -- # return 0
00:03:44.441 23:03:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:44.441 23:03:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:44.441 23:03:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:44.441 23:03:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:44.441 23:03:40 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:44.441 node0=512 expecting 512
00:03:44.441 23:03:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:44.441 23:03:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:44.441 23:03:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:44.441 23:03:40 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:44.441 node1=512 expecting 512
00:03:44.441 23:03:40 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:44.441
00:03:44.441 real 0m3.534s
00:03:44.441 user 0m1.381s
00:03:44.441 sys 0m2.223s
00:03:44.441 23:03:40 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:44.441 23:03:40 -- common/autotest_common.sh@10 -- # set +x
00:03:44.441 ************************************
00:03:44.441 END TEST even_2G_alloc
00:03:44.441 ************************************
00:03:44.441 23:03:41 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:44.441 23:03:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:44.441 23:03:41 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:44.441 23:03:41 -- common/autotest_common.sh@10 -- # set +x
00:03:44.701 ************************************
00:03:44.701 START TEST odd_alloc
00:03:44.701 ************************************
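The even_2G_alloc block closes with the harness pattern visible just above: a timed invocation followed by matching START/END banner markers around the next test. A rough sketch of such a wrapper (run_test_sketch is a made-up name, not autotest_common.sh's actual run_test):

run_test_sketch() {
    # Print a START banner, time the test command, then print an END banner.
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}
# run_test_sketch odd_alloc odd_alloc   # mirrors the call traced above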
00:03:44.701 23:03:41 -- common/autotest_common.sh@1114 -- # odd_alloc
00:03:44.701 23:03:41 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:44.701 23:03:41 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:44.702 23:03:41 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:44.702 23:03:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:44.702 23:03:41 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:44.702 23:03:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:44.702 23:03:41 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:44.702 23:03:41 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:44.702 23:03:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:44.702 23:03:41 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:44.702 23:03:41 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:44.702 23:03:41 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:44.702 23:03:41 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:44.702 23:03:41 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:44.702 23:03:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:44.702 23:03:41 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:44.702 23:03:41 -- setup/hugepages.sh@83 -- # : 513
00:03:44.702 23:03:41 -- setup/hugepages.sh@84 -- # : 1
00:03:44.702 23:03:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:44.702 23:03:41 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:44.702 23:03:41 -- setup/hugepages.sh@83 -- # : 0
00:03:44.702 23:03:41 -- setup/hugepages.sh@84 -- # : 0
00:03:44.702 23:03:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:44.702 23:03:41 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:44.702 23:03:41 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:44.702 23:03:41 -- setup/hugepages.sh@160 -- # setup output
00:03:44.702 23:03:41 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:44.703 23:03:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:47.998 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:47.998 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:47.998 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:47.998 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:47.998 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:47.998 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:47.998 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:47.998 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:47.998 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:47.998 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:47.998 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:47.998 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:47.998 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:47.998 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:47.998 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:47.998 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:47.998 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
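The per-node bookkeeping above distributes an odd total by repeatedly giving the highest-numbered remaining node its integer share of what is left: 1025/2 = 512 for node1, then the remaining 513 for node0 (the `: 513` / `: 1` lines track pages and nodes still to place; HUGEMEM=2049 MB of 2 MB pages is consistent with the rounded-up nr_hugepages=1025). A sketch of that distribution under those assumptions (split_hugepages is a hypothetical name, not the SPDK helper):

split_hugepages() {
    # Spread NR pages over NODES NUMA nodes, back to front, each node
    # taking remaining/nodes_left pages (integer division).
    local nr=$1 nodes=$2 per_node i
    local -a plan
    while (( nodes > 0 )); do
        per_node=$(( nr / nodes ))
        plan[nodes - 1]=$per_node
        nr=$(( nr - per_node ))
        nodes=$(( nodes - 1 ))
    done
    for i in "${!plan[@]}"; do echo "node$i=${plan[i]}"; done
}
# split_hugepages 1025 2   -> node0=513, node1=512, matching the trace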
00:03:47.998 23:03:44 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:47.998 23:03:44 -- setup/hugepages.sh@89 -- # local node
00:03:47.998 23:03:44 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:47.998 23:03:44 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:47.998 23:03:44 -- setup/hugepages.sh@92 -- # local surp
00:03:47.998 23:03:44 -- setup/hugepages.sh@93 -- # local resv
00:03:47.998 23:03:44 -- setup/hugepages.sh@94 -- # local anon
00:03:47.998 23:03:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:47.998 23:03:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:47.998 23:03:44 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:47.998 23:03:44 -- setup/common.sh@18 -- # local node=
00:03:47.998 23:03:44 -- setup/common.sh@19 -- # local var val
00:03:47.998 23:03:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:47.998 23:03:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.998 23:03:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.998 23:03:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.998 23:03:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.998 23:03:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.998 23:03:44 -- setup/common.sh@31 -- # IFS=': '
00:03:47.998 23:03:44 -- setup/common.sh@31 -- # read -r var val _
00:03:47.998 23:03:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43061156 kB' 'MemAvailable: 46780512 kB' 'Buffers: 8956 kB' 'Cached: 11205308 kB' 'SwapCached: 0 kB' 'Active: 7987136 kB' 'Inactive: 3688388 kB' 'Active(anon): 7570432 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464356 kB' 'Mapped: 166032 kB' 'Shmem: 7109172 kB' 'KReclaimable: 222264 kB' 'Slab: 910756 kB' 'SReclaimable: 222264 kB' 'SUnreclaim: 688492 kB' 'KernelStack: 22000 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480884 kB' 'Committed_AS: 8834864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214384 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB'
[xtrace loop elided: one '[[ field == AnonHugePages ]] / continue' pair per /proc/meminfo field listed above, until AnonHugePages matches]
00:03:48.000 23:03:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:48.000 23:03:44 -- setup/common.sh@33 -- # echo 0
00:03:48.000 23:03:44 -- setup/common.sh@33 -- # return 0
00:03:48.000 23:03:44 -- setup/hugepages.sh@97 -- # anon=0
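The `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` test at the start of verify_nr_hugepages keys off the bracketed word in /sys/kernel/mm/transparent_hugepage/enabled, i.e. whether THP is set to anything but "never". One way to extract that active mode (a sketch, not SPDK code):

thp_mode() {
    # The kernel brackets the active setting, e.g. "always [madvise] never".
    local state
    state=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    [[ $state =~ \[([a-z]+)\] ]] && echo "${BASH_REMATCH[1]}"
}
# [[ $(thp_mode) != never ]] restates the pattern test seen in the trace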
00:03:48.000 23:03:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:48.000 23:03:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:48.000 23:03:44 -- setup/common.sh@18 -- # local node=
00:03:48.000 23:03:44 -- setup/common.sh@19 -- # local var val
00:03:48.000 23:03:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.000 23:03:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.000 23:03:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.000 23:03:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.000 23:03:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.000 23:03:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.000 23:03:44 -- setup/common.sh@31 -- # IFS=': '
00:03:48.000 23:03:44 -- setup/common.sh@31 -- # read -r var val _
00:03:48.000 23:03:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43062964 kB' 'MemAvailable: 46782320 kB' 'Buffers: 8956 kB' 'Cached: 11205308 kB' 'SwapCached: 0 kB' 'Active: 7986136 kB' 'Inactive: 3688388 kB' 'Active(anon): 7569432 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463648 kB' 'Mapped: 165940 kB' 'Shmem: 7109172 kB' 'KReclaimable: 222264 kB' 'Slab: 910716 kB' 'SReclaimable: 222264 kB' 'SUnreclaim: 688452 kB' 'KernelStack: 21920 kB' 'PageTables: 7628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480884 kB' 'Committed_AS: 8836392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214448 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB'
[xtrace loop elided: one '[[ field == HugePages_Surp ]] / continue' pair per /proc/meminfo field listed above, until HugePages_Surp matches]
00:03:48.001 23:03:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:48.001 23:03:44 -- setup/common.sh@33 -- # echo 0
00:03:48.001 23:03:44 -- setup/common.sh@33 -- # return 0
00:03:48.001 23:03:44 -- setup/hugepages.sh@99 -- # surp=0
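With anon=0 and surp=0 in hand, one reserved-page lookup remains; the check all of this feeds mirrors the `(( 1024 == nr_hugepages + surp + resv ))` arithmetic near the top of this chunk. A compact sketch of that final comparison, reusing the hypothetical meminfo_value helper sketched earlier (values match the global snapshots above, where HugePages_Rsvd is 0):

# Kernel-reported total must equal requested pages plus surplus plus reserved.
nr_hugepages=1025 surp=0 resv=0
total=$(meminfo_value HugePages_Total)   # 1025 in the snapshots above
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepages verified: $total"
else
    echo "hugepages mismatch: $total != $((nr_hugepages + surp + resv))" >&2
fi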
00:03:48.001 23:03:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:48.001 23:03:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:48.001 23:03:44 -- setup/common.sh@18 -- # local node=
00:03:48.001 23:03:44 -- setup/common.sh@19 -- # local var val
00:03:48.001 23:03:44 -- setup/common.sh@20 -- # local mem_f mem
00:03:48.001 23:03:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:48.001 23:03:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:48.001 23:03:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:48.001 23:03:44 -- setup/common.sh@28 -- # mapfile -t mem
00:03:48.001 23:03:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:48.001 23:03:44 -- setup/common.sh@31 -- # IFS=': '
00:03:48.001 23:03:44 -- setup/common.sh@31 -- # read -r var val _
00:03:48.001 23:03:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43062156 kB' 'MemAvailable: 46781512 kB' 'Buffers: 8956 kB' 'Cached: 11205324 kB' 'SwapCached: 0 kB' 'Active: 7986924 kB' 'Inactive: 3688388 kB' 'Active(anon): 7570220 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464428 kB' 'Mapped: 165940 kB' 'Shmem: 7109188 kB' 'KReclaimable: 222264 kB' 'Slab: 910716 kB' 'SReclaimable: 222264 kB' 'SUnreclaim: 688452 kB' 'KernelStack: 22096 kB' 'PageTables: 7928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480884 kB' 'Committed_AS: 8836404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214464 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB'
[xtrace loop elided: the read loop is walking the fields above toward HugePages_Rsvd and is still iterating where this chunk of the log ends]
00:03:48.266 23:03:44 --
setup/common.sh@32 -- # continue 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 
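(The trace here is setup/common.sh's get_meminfo scanning every /proc/meminfo field for one key: each line is split with IFS=': ', the field name is compared against the requested key, and non-matching fields fall through to continue. The backslash-escaped right-hand side in the [[ ]] tests is how xtrace renders a literal comparison: inside [[ ]], an unquoted right operand of == is a glob pattern, so escaping or quoting every character forces an exact string match. A minimal sketch of the same scan, reading /proc/meminfo directly rather than the script's pre-slurped array:

get=HugePages_Rsvd
while IFS=': ' read -r var val _; do
    # Inside [[ ]] the right side of == is a glob unless quoted or escaped,
    # hence the \H\u\g\e... rendering in the xtrace output around this note.
    [[ $var == "$get" ]] || continue
    echo "$val"    # IFS=': ' already split off any trailing "kB" unit
    break
done < /proc/meminfo
)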
00:03:48.266 23:03:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.266 23:03:44 -- setup/common.sh@33 -- # echo 0 00:03:48.266 23:03:44 -- setup/common.sh@33 -- # return 0 00:03:48.266 23:03:44 -- setup/hugepages.sh@100 -- # resv=0 00:03:48.266 23:03:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:48.266 nr_hugepages=1025 00:03:48.266 23:03:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.266 resv_hugepages=0 00:03:48.266 23:03:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.266 surplus_hugepages=0 00:03:48.266 23:03:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.266 anon_hugepages=0 00:03:48.266 23:03:44 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:48.266 23:03:44 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:48.266 23:03:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.266 23:03:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.266 23:03:44 -- setup/common.sh@18 -- # local node= 00:03:48.266 23:03:44 -- setup/common.sh@19 -- # local var val 00:03:48.266 23:03:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.266 23:03:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.266 23:03:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.266 23:03:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.266 23:03:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.266 23:03:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.266 23:03:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43061860 kB' 'MemAvailable: 46781216 kB' 'Buffers: 8956 kB' 'Cached: 11205344 kB' 'SwapCached: 0 kB' 'Active: 7986716 kB' 'Inactive: 3688388 kB' 'Active(anon): 7570012 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464168 kB' 'Mapped: 165940 kB' 'Shmem: 7109208 kB' 'KReclaimable: 222264 kB' 'Slab: 910716 kB' 'SReclaimable: 222264 kB' 'SUnreclaim: 688452 kB' 'KernelStack: 21872 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480884 kB' 'Committed_AS: 8836424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214448 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB' 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.266 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 
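(At this point the test has established surp=0 and resv=0 and asserts that the kernel's HugePages_Total — 1025 here, odd_alloc's deliberately odd page count — equals the requested pages plus surplus plus reserved. A condensed sketch of that consistency check, using awk in place of the script's pure-bash scan; the values are taken from this run:

nr_hugepages=1025   # what this test configured
surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
# Mirrors the script's check: (( 1025 == nr_hugepages + surp + resv ))
(( total == nr_hugepages + surp + resv )) && echo "hugepage accounting is consistent"
)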
00:03:48.266 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.266 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.267 23:03:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
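(Each get_meminfo call in this trace begins the same way: the file is slurped into an array with mapfile, and a prefix substitution normalizes per-node lines. Reassembled from the setup/common.sh@22-@29 commands visible above; extglob is assumed enabled, since the +([0-9]) pattern requires it:

shopt -s extglob                  # +([0-9]) below is an extglob pattern
mem_f=/proc/meminfo               # or /sys/devices/system/node/nodeN/meminfo
mapfile -t mem < "$mem_f"         # one array element per line
# Per-node files prefix every line with "Node <n> "; strip it so the same
# field scan works for both the global and the per-node layout:
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"         # this printf is what dumps the values seen in the trace
)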
00:03:48.267 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.267 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.268 23:03:44 -- setup/common.sh@33 -- # echo 1025 00:03:48.268 23:03:44 -- setup/common.sh@33 -- # return 0 00:03:48.268 23:03:44 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:48.268 23:03:44 -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.268 23:03:44 -- setup/hugepages.sh@27 -- # local node 00:03:48.268 23:03:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.268 23:03:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.268 23:03:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.268 23:03:44 -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=513 00:03:48.268 23:03:44 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.268 23:03:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.268 23:03:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.268 23:03:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.268 23:03:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.268 23:03:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.268 23:03:44 -- setup/common.sh@18 -- # local node=0 00:03:48.268 23:03:44 -- setup/common.sh@19 -- # local var val 00:03:48.268 23:03:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.268 23:03:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.268 23:03:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.268 23:03:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.268 23:03:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.268 23:03:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26546880 kB' 'MemUsed: 6038488 kB' 'SwapCached: 0 kB' 'Active: 2835824 kB' 'Inactive: 189036 kB' 'Active(anon): 2646152 kB' 'Inactive(anon): 0 kB' 'Active(file): 189672 kB' 'Inactive(file): 189036 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2860076 kB' 'Mapped: 77268 kB' 'AnonPages: 168028 kB' 'Shmem: 2481368 kB' 'KernelStack: 10904 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129516 kB' 'Slab: 480732 kB' 'SReclaimable: 129516 kB' 'SUnreclaim: 351216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.268 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.268 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- 
setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@33 -- # echo 0 00:03:48.269 23:03:44 -- setup/common.sh@33 -- # return 0 00:03:48.269 23:03:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.269 23:03:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.269 23:03:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.269 23:03:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:48.269 23:03:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.269 23:03:44 -- setup/common.sh@18 -- # local node=1 00:03:48.269 23:03:44 -- setup/common.sh@19 -- # local var val 00:03:48.269 23:03:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.269 23:03:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.269 23:03:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:48.269 23:03:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.269 23:03:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.269 23:03:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698396 kB' 'MemFree: 16513312 kB' 'MemUsed: 11185084 kB' 'SwapCached: 0 kB' 'Active: 5150848 kB' 'Inactive: 3499352 kB' 'Active(anon): 4923816 kB' 'Inactive(anon): 0 kB' 'Active(file): 227032 kB' 'Inactive(file): 3499352 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8354236 kB' 'Mapped: 88672 kB' 'AnonPages: 296032 kB' 'Shmem: 4627852 kB' 'KernelStack: 11128 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92748 kB' 'Slab: 430116 kB' 
'SReclaimable: 92748 kB' 'SUnreclaim: 337368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.269 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.269 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 
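(The two printf dumps in this stretch come from node0 and node1: passing a node number to get_meminfo swaps mem_f from /proc/meminfo to /sys/devices/system/node/node<N>/meminfo — the [[ -e ... ]] checks in the trace — and the test then reads each node's HugePages_Surp. A condensed equivalent with a hypothetical helper name, and awk standing in for the bash scan; per-node HugePages fields carry no "kB" unit, so the last field is the value:

# node_meminfo is a hypothetical helper, not a function from setup/common.sh
node_meminfo() {
    local node=$1 get=$2 mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    # Per-node lines read "Node 1 HugePages_Surp: 0"; match on the field name.
    awk -v g="$get" '$(NF-1) == g ":" {print $NF}' "$mem_f"
}
node_meminfo 0 HugePages_Surp   # 0 in the run above
node_meminfo 1 HugePages_Surp   # 0 in the run above
)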
00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- 
setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # continue 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.270 23:03:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.270 23:03:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.270 23:03:44 -- setup/common.sh@33 -- # echo 0 00:03:48.270 23:03:44 -- setup/common.sh@33 -- # return 0 00:03:48.270 23:03:44 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:03:48.270 23:03:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.270 23:03:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.270 23:03:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.270 23:03:44 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:48.270 node0=512 expecting 513 00:03:48.270 23:03:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.270 23:03:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.270 23:03:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.270 23:03:44 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:48.270 node1=513 expecting 512 00:03:48.270 23:03:44 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:48.270 00:03:48.270 real 0m3.687s 00:03:48.270 user 0m1.369s 00:03:48.270 sys 0m2.389s 00:03:48.270 23:03:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:48.270 23:03:44 -- common/autotest_common.sh@10 -- # set +x 00:03:48.270 ************************************ 00:03:48.270 END TEST odd_alloc 00:03:48.270 ************************************ 00:03:48.270 23:03:44 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:48.270 23:03:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:48.270 23:03:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:48.270 23:03:44 -- common/autotest_common.sh@10 -- # set +x 00:03:48.270 ************************************ 00:03:48.271 START TEST custom_alloc 00:03:48.271 ************************************ 00:03:48.271 23:03:44 -- common/autotest_common.sh@1114 -- # custom_alloc 00:03:48.271 23:03:44 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:48.271 23:03:44 -- setup/hugepages.sh@169 -- # local node 00:03:48.271 23:03:44 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:48.271 23:03:44 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:48.271 23:03:44 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:48.271 23:03:44 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:48.271 23:03:44 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:48.271 23:03:44 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.271 23:03:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.271 23:03:44 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:48.271 23:03:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.271 23:03:44 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.271 23:03:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.271 23:03:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:48.271 23:03:44 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.271 23:03:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.271 23:03:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.271 23:03:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.271 23:03:44 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:48.271 23:03:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.271 23:03:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:48.271 23:03:44 -- setup/hugepages.sh@83 -- # : 256 00:03:48.271 23:03:44 -- setup/hugepages.sh@84 -- # : 1 00:03:48.271 23:03:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.271 23:03:44 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:48.271 23:03:44 -- setup/hugepages.sh@83 
-- # : 0 00:03:48.271 23:03:44 -- setup/hugepages.sh@84 -- # : 0 00:03:48.271 23:03:44 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.271 23:03:44 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:48.271 23:03:44 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:48.271 23:03:44 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:48.271 23:03:44 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:48.271 23:03:44 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.271 23:03:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.271 23:03:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:48.271 23:03:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.271 23:03:44 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.271 23:03:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.271 23:03:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.271 23:03:44 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.271 23:03:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.271 23:03:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.271 23:03:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.271 23:03:44 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:48.271 23:03:44 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:48.271 23:03:44 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:48.271 23:03:44 -- setup/hugepages.sh@78 -- # return 0 00:03:48.271 23:03:44 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:48.271 23:03:44 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:48.271 23:03:44 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:48.271 23:03:44 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:48.271 23:03:44 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:48.271 23:03:44 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:48.271 23:03:44 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:48.271 23:03:44 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:48.271 23:03:44 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.271 23:03:44 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.271 23:03:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.271 23:03:44 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.271 23:03:44 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.271 23:03:44 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.271 23:03:44 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.271 23:03:44 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:48.271 23:03:44 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:48.271 23:03:44 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:48.271 23:03:44 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:48.271 23:03:44 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:48.271 23:03:44 -- setup/hugepages.sh@78 -- # return 0 00:03:48.271 23:03:44 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:48.271 23:03:44 -- setup/hugepages.sh@187 -- # setup output 00:03:48.271 23:03:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.271 23:03:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:03:51.579 0000:00:04.7 (8086 2021): Already using the 
00:03:51.579 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:51.579 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:51.579 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:51.579 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:51.579 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:51.579 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:51.579 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:51.579 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:51.579 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:51.579 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:51.579 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:51.579 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:51.579 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:51.579 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:51.579 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:51.579 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:51.579 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:51.579 23:03:48 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:51.579 23:03:48 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:51.579 23:03:48 -- setup/hugepages.sh@89 -- # local node
00:03:51.579 23:03:48 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:51.579 23:03:48 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:51.579 23:03:48 -- setup/hugepages.sh@92 -- # local surp
00:03:51.579 23:03:48 -- setup/hugepages.sh@93 -- # local resv
00:03:51.579 23:03:48 -- setup/hugepages.sh@94 -- # local anon
00:03:51.579 23:03:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:51.579 23:03:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:51.579 23:03:48 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:51.579 23:03:48 -- setup/common.sh@18 -- # local node=
00:03:51.579 23:03:48 -- setup/common.sh@19 -- # local var val
00:03:51.579 23:03:48 -- setup/common.sh@20 -- # local mem_f mem
00:03:51.579 23:03:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.579 23:03:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.579 23:03:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.579 23:03:48 -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.579 23:03:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.579 23:03:48 -- setup/common.sh@31 -- # IFS=': '
00:03:51.579 23:03:48 -- setup/common.sh@31 -- # read -r var val _
00:03:51.579 23:03:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 42012364 kB' 'MemAvailable: 45731720 kB' 'Buffers: 8956 kB' 'Cached: 11205444 kB' 'SwapCached: 0 kB' 'Active: 7987308 kB' 'Inactive: 3688388 kB' 'Active(anon): 7570604 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464616 kB' 'Mapped: 166028 kB' 'Shmem: 7109308 kB' 'KReclaimable: 222264 kB' 'Slab: 912128 kB' 'SReclaimable: 222264 kB' 'SUnreclaim: 689864 kB' 'KernelStack: 21840 kB' 'PageTables: 7480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957620 kB' 'Committed_AS: 8832624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214464 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB'
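The get_meminfo helper traced here snapshots the whole meminfo file with mapfile, strips the "Node N " prefix that per-node files carry, and then walks the snapshot key by key until it reaches the requested field; that walk is what produces the long runs of compare/continue lines that follow. A rough standalone equivalent (the function name and call shape match the trace, but this implementation is a simplified sketch, not the script's code):

#!/usr/bin/env bash
# get_meminfo FIELD [NODE] - print one value from /proc/meminfo or from
# /sys/devices/system/node/nodeN/meminfo. Simplified sketch of the scan
# the trace performs one key at a time.
get_meminfo() {
  local get=$1 node=$2 var val _
  local mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  # Per-node files prefix every line with "Node N "; drop it up front.
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # skip every other key
    echo "${val:-0}"
    return 0
  done < <(sed 's/^Node [0-9]* //' "$mem_f")
  return 1
}

get_meminfo AnonHugePages     # -> 0 in this run
get_meminfo HugePages_Total   # -> 1536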
[xtrace elided: setup/common.sh@31-32 walk the snapshot with IFS=': '; read -r var val _ and compare every key in turn against AnonHugePages, skipping each non-match with 'continue']
00:03:51.580 23:03:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:51.580 23:03:48 -- setup/common.sh@33 -- # echo 0
00:03:51.580 23:03:48 -- setup/common.sh@33 -- # return 0
00:03:51.580 23:03:48 -- setup/hugepages.sh@97 -- # anon=0
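A small bash detail in these compares: inside [[ ]] an unquoted right-hand side is a glob pattern, so the script quotes the target key, and xtrace renders that quoted, literal right-hand side with every character backslash-escaped, which is why it prints as \A\n\o\n\H\u\g\e\P\a\g\e\s. An illustrative snippet (values made up):

#!/usr/bin/env bash
var=HugePages_Surp
[[ $var == HugePages_* ]] && echo 'unquoted RHS is a glob: matches'
[[ $var == "HugePages_*" ]] || echo 'quoted RHS is literal: no match'
# under set -x the literal compare echoes as \H\u\g\e\P\a\g\e\s\_\S\u\r\p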
00:03:51.580 23:03:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:51.580 23:03:48 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:51.580 23:03:48 -- setup/common.sh@18 -- # local node=
00:03:51.580 23:03:48 -- setup/common.sh@19 -- # local var val
00:03:51.580 23:03:48 -- setup/common.sh@20 -- # local mem_f mem
00:03:51.580 23:03:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.580 23:03:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.580 23:03:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.580 23:03:48 -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.580 23:03:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.580 23:03:48 -- setup/common.sh@31 -- # IFS=': '
00:03:51.580 23:03:48 -- setup/common.sh@31 -- # read -r var val _
00:03:51.580 23:03:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 42012664 kB' 'MemAvailable: 45732020 kB' 'Buffers: 8956 kB' 'Cached: 11205448 kB' 'SwapCached: 0 kB' 'Active: 7986640 kB' 'Inactive: 3688388 kB' 'Active(anon): 7569936 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463988 kB' 'Mapped: 165928 kB' 'Shmem: 7109312 kB' 'KReclaimable: 222264 kB' 'Slab: 912092 kB' 'SReclaimable: 222264 kB' 'SUnreclaim: 689828 kB' 'KernelStack: 21888 kB' 'PageTables: 7624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957620 kB' 'Committed_AS: 8832636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214432 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB'
[xtrace elided: the per-key compare/continue scan repeats, this time against HugePages_Surp, until the key matches]
00:03:51.581 23:03:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.581 23:03:48 -- setup/common.sh@33 -- # echo 0
00:03:51.581 23:03:48 -- setup/common.sh@33 -- # return 0
00:03:51.581 23:03:48 -- setup/hugepages.sh@99 -- # surp=0
00:03:51.581 23:03:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:51.581 23:03:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:51.581 23:03:48 -- setup/common.sh@18 -- # local node=
00:03:51.581 23:03:48 -- setup/common.sh@19 -- # local var val
00:03:51.581 23:03:48 -- setup/common.sh@20 -- # local mem_f mem
00:03:51.581 23:03:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.581 23:03:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.581 23:03:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.582 23:03:48 -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.582 23:03:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.582 23:03:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 42013336 kB' 'MemAvailable: 45732692 kB' 'Buffers: 8956 kB' 'Cached: 11205460 kB' 'SwapCached: 0 kB' 'Active: 7986684 kB' 'Inactive: 3688388 kB' 'Active(anon): 7569980 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463992 kB' 'Mapped: 165928 kB' 'Shmem: 7109324 kB' 'KReclaimable: 222264 kB' 'Slab: 912092 kB' 'SReclaimable: 222264 kB' 'SUnreclaim: 689828 kB' 'KernelStack: 21888 kB' 'PageTables: 7624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957620 kB' 'Committed_AS: 8832652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214432 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB'
[xtrace elided: the per-key compare/continue scan repeats, this time against HugePages_Rsvd, until the key matches]
00:03:51.583 23:03:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:51.583 23:03:48 -- setup/common.sh@33 -- # echo 0
00:03:51.583 23:03:48 -- setup/common.sh@33 -- # return 0
00:03:51.583 23:03:48 -- setup/hugepages.sh@100 -- # resv=0
00:03:51.583 23:03:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:51.583 nr_hugepages=1536
00:03:51.583 23:03:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:51.583 resv_hugepages=0
00:03:51.583 23:03:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:51.583 surplus_hugepages=0
00:03:51.583 23:03:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:51.583 anon_hugepages=0
00:03:51.583 23:03:48 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:51.583 23:03:48 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
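The assertion traced above is the accounting identity at the heart of verify_nr_hugepages: the pages actually present (HugePages_Total) must equal the requested count plus any surplus and reserved pages, and in this run 512 + 1024 = 1536 with surp=0 and resv=0. A hedged sketch of the same check (verify_hugepage_count is an illustrative name, and the awk extraction stands in for the get_meminfo scan shown earlier):

#!/usr/bin/env bash
# Assert HugePages_Total == expected + surplus + reserved, as the trace
# does with (( 1536 == nr_hugepages + surp + resv )).
verify_hugepage_count() {
  local expected=$1 total surp resv
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
  resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
  (( total == expected + surp + resv )) || {
    echo "hugepage mismatch: total=$total expected=$expected surp=$surp resv=$resv" >&2
    return 1
  }
}
verify_hugepage_count 1536 && echo "nr_hugepages=1536 verified"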
00:03:51.583 23:03:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:51.583 23:03:48 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:51.583 23:03:48 -- setup/common.sh@18 -- # local node=
00:03:51.583 23:03:48 -- setup/common.sh@19 -- # local var val
00:03:51.583 23:03:48 -- setup/common.sh@20 -- # local mem_f mem
00:03:51.583 23:03:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.583 23:03:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.583 23:03:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.583 23:03:48 -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.583 23:03:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.583 23:03:48 -- setup/common.sh@31 -- # IFS=': '
00:03:51.583 23:03:48 -- setup/common.sh@31 -- # read -r var val _
00:03:51.583 23:03:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 42013588 kB' 'MemAvailable: 45732944 kB' 'Buffers: 8956 kB' 'Cached: 11205472 kB' 'SwapCached: 0 kB' 'Active: 7986660 kB' 'Inactive: 3688388 kB' 'Active(anon): 7569956 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463972 kB' 'Mapped: 165928 kB' 'Shmem: 7109336 kB' 'KReclaimable: 222264 kB' 'Slab: 912092 kB' 'SReclaimable: 222264 kB' 'SUnreclaim: 689828 kB' 'KernelStack: 21888 kB' 'PageTables: 7624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957620 kB' 'Committed_AS: 8832668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214432 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB'
[xtrace elided: the per-key compare/continue scan repeats, this time against HugePages_Total, until the key matches]
00:03:51.584 23:03:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.584 23:03:48 -- setup/common.sh@32 -- # continue 00:03:51.584 23:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.584 23:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.584 23:03:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.584 23:03:48 -- setup/common.sh@32 -- # continue 00:03:51.584 23:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.584 23:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.584 23:03:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.584 23:03:48 -- setup/common.sh@32 -- # continue 00:03:51.584 23:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.584 23:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.584 23:03:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.584 23:03:48 -- setup/common.sh@32 -- # continue 00:03:51.584 23:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.584 23:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.584 23:03:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:51.584 23:03:48 -- setup/common.sh@33 -- # echo 1536 00:03:51.584 23:03:48 -- setup/common.sh@33 -- # return 0 00:03:51.584 23:03:48 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:51.584 23:03:48 -- setup/hugepages.sh@112 -- # get_nodes 00:03:51.584 23:03:48 -- setup/hugepages.sh@27 -- # local node 00:03:51.584 23:03:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.584 23:03:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:51.584 23:03:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.585 23:03:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:51.585 23:03:48 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:51.585 23:03:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:51.585 23:03:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:51.585 23:03:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:51.585 23:03:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:51.585 23:03:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.585 23:03:48 -- setup/common.sh@18 -- # local node=0 00:03:51.585 23:03:48 -- setup/common.sh@19 -- # local var val 00:03:51.585 23:03:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:51.585 23:03:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.585 23:03:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:51.585 23:03:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:51.585 23:03:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.585 23:03:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.846 23:03:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:51.846 23:03:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:51.846 23:03:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 26548112 kB' 'MemUsed: 6037256 kB' 'SwapCached: 0 kB' 'Active: 2835436 kB' 'Inactive: 189036 kB' 'Active(anon): 2645764 kB' 'Inactive(anon): 0 kB' 'Active(file): 189672 kB' 'Inactive(file): 189036 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2860152 kB' 'Mapped: 77256 kB' 'AnonPages: 167540 
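The get_meminfo call traced above (and repeatedly below) is the workhorse of these checks: it picks /sys/devices/system/node/nodeN/meminfo when a node is given, falls back to /proc/meminfo otherwise, mapfiles the contents, strips the "Node N " prefix, then scans line by line with IFS=': ' until the requested key matches and echoes its value. A minimal standalone sketch of that pattern, assuming bash with extglob; the function name mirrors the script, but the body is an illustrative reconstruction, not the verbatim setup/common.sh source:

  #!/usr/bin/env bash
  shopt -s extglob

  # Sketch of the lookup the xtrace shows: print the value of field $1,
  # read from node $2's meminfo if it exists, else from /proc/meminfo.
  get_meminfo() {
    local get=$1 node=$2 var val _ line
    local mem_f=/proc/meminfo mem
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<<"$line"
      if [[ $var == "$get" ]]; then
        echo "$val"
        return 0
      fi
    done
    return 1
  }

  get_meminfo HugePages_Surp 0   # e.g. prints 0 against the node0 dump above

Echoing the value keeps the helper usable in command substitution, which is how the surrounding hugepages.sh assignments (surp=..., resv=...) consume it.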
[xtrace condensed: the field scan walks the node0 meminfo dump, hitting 'continue' for MemTotal, MemFree, MemUsed and every other key until it reaches HugePages_Surp]
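A note on the backslash-heavy patterns throughout this trace, including the match that follows: set -x escapes every character of the quoted right-hand side of [[ $var == "$get" ]] to show it is being matched literally rather than as a glob, which is why HugePages_Surp is rendered as \H\u\g\e\P\a\g\e\s\_\S\u\r\p. A two-line demonstration with illustrative values:

  bash -xc 'get=HugePages_Surp; var=HugePages_Surp; [[ $var == "$get" ]]'
  # xtrace prints, roughly: + [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]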
00:03:51.847 23:03:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.847 23:03:48 -- setup/common.sh@33 -- # echo 0
00:03:51.847 23:03:48 -- setup/common.sh@33 -- # return 0
00:03:51.847 23:03:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:51.847 23:03:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:51.847 23:03:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:51.847 23:03:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:51.847 23:03:48 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:51.847 23:03:48 -- setup/common.sh@18 -- # local node=1
00:03:51.847 23:03:48 -- setup/common.sh@19 -- # local var val
00:03:51.847 23:03:48 -- setup/common.sh@20 -- # local mem_f mem
00:03:51.847 23:03:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.847 23:03:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:51.847 23:03:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:51.847 23:03:48 -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.847 23:03:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.847 23:03:48 -- setup/common.sh@31 -- # IFS=': '
00:03:51.847 23:03:48 -- setup/common.sh@31 -- # read -r var val _
00:03:51.847 23:03:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698396 kB' 'MemFree: 15465252 kB' 'MemUsed: 12233144 kB' 'SwapCached: 0 kB' 'Active: 5151488 kB' 'Inactive: 3499352 kB' 'Active(anon): 4924456 kB' 'Inactive(anon): 0 kB' 'Active(file): 227032 kB' 'Inactive(file): 3499352 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8354280 kB' 'Mapped: 88672 kB' 'AnonPages: 296768 kB' 'Shmem: 4627896 kB' 'KernelStack: 11000 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92748 kB' 'Slab: 430488 kB' 'SReclaimable: 92748 kB' 'SUnreclaim: 337740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the field scan hits 'continue' for every key in the node1 dump above until HugePages_Surp matches]
00:03:51.848 23:03:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.848 23:03:48 -- setup/common.sh@33 -- # echo 0
00:03:51.848 23:03:48 -- setup/common.sh@33 -- # return 0
00:03:51.848 23:03:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:51.848 23:03:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:51.848 23:03:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:51.848 23:03:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:51.848 23:03:48 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:51.848 23:03:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:51.848 23:03:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:51.848 23:03:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:51.848 23:03:48 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
00:03:51.848 23:03:48 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:51.848 real 0m3.440s
00:03:51.848 user 0m1.305s
00:03:51.848 sys 0m2.198s
00:03:51.848 23:03:48 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:51.848 23:03:48 -- common/autotest_common.sh@10 -- # set +x
00:03:51.848 ************************************
00:03:51.848 END TEST custom_alloc
00:03:51.848 ************************************
00:03:51.848 23:03:48 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:51.848 23:03:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:51.848 23:03:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:51.848 23:03:48 -- common/autotest_common.sh@10 -- # set +x
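custom_alloc has just verified that the 1536-page pool landed on the NUMA nodes as requested: 512 pages on node0 and 1024 on node1, with zero surplus on either. A rough standalone equivalent of that final layout check, reading the same per-node meminfo files; the loop is an illustrative reduction of the sorted_t/sorted_s bookkeeping in hugepages.sh, not the script itself:

  # Hypothetical re-check of the node layout custom_alloc expects.
  declare -A expected=([0]=512 [1]=1024)
  for node in "${!expected[@]}"; do
    actual=$(awk '/HugePages_Total/ {print $NF}' \
      "/sys/devices/system/node/node$node/meminfo")
    echo "node$node=$actual expecting ${expected[$node]}"
    (( actual == expected[$node] )) || exit 1
  done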
00:03:51.848 ************************************
00:03:51.848 START TEST no_shrink_alloc
00:03:51.848 ************************************
00:03:51.848 23:03:48 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:03:51.848 23:03:48 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:51.848 23:03:48 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:51.848 23:03:48 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:51.848 23:03:48 -- setup/hugepages.sh@51 -- # shift
00:03:51.848 23:03:48 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:51.848 23:03:48 -- setup/hugepages.sh@52 -- # local node_ids
00:03:51.848 23:03:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:51.848 23:03:48 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:51.848 23:03:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:51.848 23:03:48 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:51.848 23:03:48 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:51.848 23:03:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:51.848 23:03:48 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:51.848 23:03:48 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:51.848 23:03:48 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:51.848 23:03:48 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:51.848 23:03:48 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:51.848 23:03:48 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:51.848 23:03:48 -- setup/hugepages.sh@73 -- # return 0
00:03:51.848 23:03:48 -- setup/hugepages.sh@198 -- # setup output
00:03:51.848 23:03:48 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:51.848 23:03:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:55.163 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:55.163 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:55.163 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:55.163 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:55.163 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:55.163 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:55.163 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:55.163 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:55.163 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:55.163 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:55.163 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:55.163 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:55.163 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:55.163 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:55.163 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:55.163 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:55.163 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:55.163 23:03:51 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:55.163 23:03:51 -- setup/hugepages.sh@89 -- # local node
00:03:55.163 23:03:51 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:55.163 23:03:51 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:55.163 23:03:51 -- setup/hugepages.sh@92 -- # local surp
00:03:55.163 23:03:51 -- setup/hugepages.sh@93 -- # local resv
00:03:55.163 23:03:51 -- setup/hugepages.sh@94 -- # local anon
00:03:55.163 23:03:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
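For reference, the nr_hugepages=1024 that get_test_nr_hugepages derives above is consistent with the requested size divided by the default hugepage size (Hugepagesize: 2048 kB in the meminfo dumps below); the division itself is not shown in this excerpt, so the following is an inferred sketch using this run's values:

  size=2097152            # kB, first argument to get_test_nr_hugepages (2 GiB)
  default_hugepages=2048  # kB, Hugepagesize from /proc/meminfo
  echo $(( size / default_hugepages ))   # prints 1024, the nr_hugepages above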
00:03:55.163 23:03:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:55.163 23:03:51 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:55.163 23:03:51 -- setup/common.sh@18 -- # local node=
00:03:55.163 23:03:51 -- setup/common.sh@19 -- # local var val
00:03:55.163 23:03:51 -- setup/common.sh@20 -- # local mem_f mem
00:03:55.163 23:03:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.163 23:03:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.163 23:03:51 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.163 23:03:51 -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.163 23:03:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.163 23:03:51 -- setup/common.sh@31 -- # IFS=': '
00:03:55.163 23:03:51 -- setup/common.sh@31 -- # read -r var val _
00:03:55.163 23:03:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43069992 kB' 'MemAvailable: 46789340 kB' 'Buffers: 8956 kB' 'Cached: 11205564 kB' 'SwapCached: 0 kB' 'Active: 7986336 kB' 'Inactive: 3688388 kB' 'Active(anon): 7569632 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463652 kB' 'Mapped: 164872 kB' 'Shmem: 7109428 kB' 'KReclaimable: 222248 kB' 'Slab: 910988 kB' 'SReclaimable: 222248 kB' 'SUnreclaim: 688740 kB' 'KernelStack: 21808 kB' 'PageTables: 7484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8798892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214400 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB'
[xtrace condensed: the field scan walks the full /proc/meminfo dump above (MemTotal through HardwareCorrupted), hitting 'continue' until AnonHugePages matches]
00:03:55.164 23:03:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:55.164 23:03:51 -- setup/common.sh@33 -- # echo 0
00:03:55.164 23:03:51 -- setup/common.sh@33 -- # return 0
00:03:55.164 23:03:51 -- setup/hugepages.sh@97 -- # anon=0
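With anon known, verify_nr_hugepages goes on to gather surp (HugePages_Surp) and then resv (HugePages_Rsvd), the same quantities the earlier (( 1536 == nr_hugepages + surp + resv )) check consumed. A condensed sketch of that accounting, assuming the get_meminfo helper sketched earlier and the 1024-page pool configured above; whether the script applies exactly this assertion at this point is inferred from the custom_alloc trace:

  # Sketch only: get_meminfo is the illustrative helper from earlier,
  # and nr_hugepages is assumed to hold the configured pool size.
  nr_hugepages=1024
  anon=$(get_meminfo AnonHugePages)     # 0 when THP is not inflating counts
  surp=$(get_meminfo HugePages_Surp)    # surplus pages beyond the pool
  resv=$(get_meminfo HugePages_Rsvd)    # reserved pages not yet faulted in
  total=$(get_meminfo HugePages_Total)
  (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage count"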
00:03:55.164 23:03:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.164 23:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.164 23:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.164 23:03:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43074024 kB' 'MemAvailable: 46793372 kB' 'Buffers: 8956 kB' 'Cached: 11205568 kB' 'SwapCached: 0 kB' 'Active: 7986328 kB' 'Inactive: 3688388 kB' 'Active(anon): 7569624 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463632 kB' 'Mapped: 164856 kB' 'Shmem: 7109432 kB' 'KReclaimable: 222248 kB' 'Slab: 910996 kB' 'SReclaimable: 222248 kB' 'SUnreclaim: 688748 kB' 'KernelStack: 21824 kB' 'PageTables: 7504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8798904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214368 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB' 00:03:55.164 23:03:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.164 23:03:51 -- setup/common.sh@32 -- # continue 00:03:55.164 23:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.164 23:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.164 23:03:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.164 23:03:51 -- setup/common.sh@32 -- # continue 00:03:55.164 23:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.164 23:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.164 23:03:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.164 23:03:51 -- setup/common.sh@32 -- # continue 00:03:55.164 23:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.164 23:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.164 23:03:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.164 23:03:51 -- setup/common.sh@32 -- # continue 00:03:55.164 23:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.165 23:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.165 23:03:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.165 23:03:51 -- setup/common.sh@32 -- # continue 00:03:55.165 23:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.165 23:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.165 23:03:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.165 23:03:51 -- setup/common.sh@32 -- # continue 00:03:55.165 23:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.165 23:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.165 23:03:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.165 23:03:51 -- setup/common.sh@32 -- # continue 00:03:55.165 23:03:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:55.165 23:03:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:55.165 23:03:51 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.165 23:03:51 -- setup/common.sh@32 -- # continue
00:03:55.165 [log condensed: the identical IFS=': ' / read -r var val _ / [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue trace repeats for every remaining non-matching /proc/meminfo field, Active(anon) through HugePages_Rsvd]
00:03:55.166 23:03:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.166 23:03:51 -- setup/common.sh@33 -- # echo 0
00:03:55.166 23:03:51 -- setup/common.sh@33 -- # return 0
00:03:55.166 23:03:51 -- setup/hugepages.sh@99 -- # surp=0
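[editor's note: the trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time. A minimal, self-contained bash sketch of that scan pattern follows; get_meminfo_sketch is a hypothetical name for illustration, not the SPDK helper itself.]

#!/usr/bin/env bash
# Scan "Field: value ..." records the way the xtrace output above implies:
# split each line on ': ' and spaces, skip until the requested field matches,
# then print its value and stop.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every non-match logs one "continue" above
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Surp   # prints 0 on the machine traced above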
00:03:55.166 23:03:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:55.166 23:03:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:55.166 23:03:51 -- setup/common.sh@18 -- # local node=
00:03:55.166 23:03:51 -- setup/common.sh@19 -- # local var val
00:03:55.166 23:03:51 -- setup/common.sh@20 -- # local mem_f mem
00:03:55.166 23:03:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.166 23:03:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.166 23:03:51 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.166 23:03:51 -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.166 23:03:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.166 23:03:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43075296 kB' 'MemAvailable: 46794644 kB' 'Buffers: 8956 kB' 'Cached: 11205580 kB' 'SwapCached: 0 kB' 'Active: 7986328 kB' 'Inactive: 3688388 kB' 'Active(anon): 7569624 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463648 kB' 'Mapped: 164856 kB' 'Shmem: 7109444 kB' 'KReclaimable: 222248 kB' 'Slab: 911032 kB' 'SReclaimable: 222248 kB' 'SUnreclaim: 688784 kB' 'KernelStack: 21824 kB' 'PageTables: 7524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8798916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214368 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB'
00:03:55.166 [log condensed: the field-by-field scan then skipped MemTotal through HugePages_Free before matching]
00:03:55.167 23:03:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:55.167 23:03:51 -- setup/common.sh@33 -- # echo 0
00:03:55.167 23:03:51 -- setup/common.sh@33 -- # return 0
00:03:55.167 23:03:51 -- setup/hugepages.sh@100 -- # resv=0
00:03:55.167 23:03:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:55.167 nr_hugepages=1024
00:03:55.167 23:03:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:55.167 resv_hugepages=0
00:03:55.167 23:03:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:55.167 surplus_hugepages=0
00:03:55.167 23:03:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:55.167 anon_hugepages=0
00:03:55.167 23:03:51 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:55.167 23:03:51 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
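[editor's note: hugepages.sh@107-109 above appears to assert that the configured hugepage count matches what the kernel reports. A hedged sketch of that bookkeeping, using awk as this note's shorthand rather than the script's own read loop:]

#!/usr/bin/env bash
# Compare the requested hugepage count against the kernel-wide totals:
# HugePages_Total should equal the target plus any surplus and reserved pages.
nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
else
    echo "mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
fi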
00:03:55.167 23:03:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:55.167 23:03:51 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:55.167 23:03:51 -- setup/common.sh@18 -- # local node=
00:03:55.167 23:03:51 -- setup/common.sh@19 -- # local var val
00:03:55.167 23:03:51 -- setup/common.sh@20 -- # local mem_f mem
00:03:55.167 23:03:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.167 23:03:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.167 23:03:51 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.167 23:03:51 -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.167 23:03:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.167 23:03:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43075296 kB' 'MemAvailable: 46794644 kB' 'Buffers: 8956 kB' 'Cached: 11205608 kB' 'SwapCached: 0 kB' 'Active: 7985972 kB' 'Inactive: 3688388 kB' 'Active(anon): 7569268 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463240 kB' 'Mapped: 164856 kB' 'Shmem: 7109472 kB' 'KReclaimable: 222248 kB' 'Slab: 911032 kB' 'SReclaimable: 222248 kB' 'SUnreclaim: 688784 kB' 'KernelStack: 21808 kB' 'PageTables: 7472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8798932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214368 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB'
00:03:55.167 [log condensed: the field-by-field scan skipped MemTotal through Unaccepted before matching]
00:03:55.169 23:03:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:55.169 23:03:51 -- setup/common.sh@33 -- # echo 1024
00:03:55.169 23:03:51 -- setup/common.sh@33 -- # return 0
00:03:55.169 23:03:51 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:55.169 23:03:51 -- setup/hugepages.sh@112 -- # get_nodes
00:03:55.169 23:03:51 -- setup/hugepages.sh@27 -- # local node
00:03:55.169 23:03:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:55.169 23:03:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:55.169 23:03:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:55.169 23:03:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:55.169 23:03:51 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:55.169 23:03:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
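[editor's note: get_nodes above enumerates NUMA nodes with the extglob pattern node+([0-9]) and records a per-node hugepage count. A minimal sketch of that per-node discovery; the awk extraction is this note's shorthand, not the script's own method:]

#!/usr/bin/env bash
# Enumerate /sys/devices/system/node/node<N> and record each node's
# HugePages_Total, mirroring the nodes_sys[] bookkeeping traced above.
shopt -s extglob nullglob
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    # per-node meminfo lines look like "Node 0 HugePages_Total:  1024"
    nodes_sys[${node##*node}]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
done
echo "no_nodes=${#nodes_sys[@]}"   # 2 on the machine traced above (node0=1024, node1=0)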
00:03:55.169 23:03:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:55.169 23:03:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:55.169 23:03:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:55.169 23:03:51 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:55.169 23:03:51 -- setup/common.sh@18 -- # local node=0
00:03:55.169 23:03:51 -- setup/common.sh@19 -- # local var val
00:03:55.169 23:03:51 -- setup/common.sh@20 -- # local mem_f mem
00:03:55.169 23:03:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.169 23:03:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:55.169 23:03:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:55.169 23:03:51 -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.169 23:03:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.169 23:03:51 -- setup/common.sh@31 -- # IFS=': '
00:03:55.169 23:03:51 -- setup/common.sh@31 -- # read -r var val _
00:03:55.169 23:03:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 25512672 kB' 'MemUsed: 7072696 kB' 'SwapCached: 0 kB' 'Active: 2836492 kB' 'Inactive: 189036 kB' 'Active(anon): 2646820 kB' 'Inactive(anon): 0 kB' 'Active(file): 189672 kB' 'Inactive(file): 189036 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2860256 kB' 'Mapped: 76320 kB' 'AnonPages: 168660 kB' 'Shmem: 2481548 kB' 'KernelStack: 10920 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129508 kB' 'Slab: 480644 kB' 'SReclaimable: 129508 kB' 'SUnreclaim: 351136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:55.169 [log condensed: the field-by-field scan of node0's meminfo skipped MemTotal through HugePages_Free before matching]
00:03:55.170 23:03:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.170 23:03:51 -- setup/common.sh@33 -- # echo 0
00:03:55.170 23:03:51 -- setup/common.sh@33 -- # return 0
00:03:55.170 23:03:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:55.170 23:03:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:55.170 23:03:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:55.170 23:03:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:55.170 23:03:51 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:55.170 node0=1024 expecting 1024
00:03:55.170 23:03:51 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:55.170 23:03:51 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:55.170 23:03:51 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:55.170 23:03:51 -- setup/hugepages.sh@202 -- # setup output
00:03:55.170 23:03:51 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:55.170 23:03:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh
00:03:58.469 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:58.469 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:58.469 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:58.469 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:58.469 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:58.469 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:58.469 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:58.469 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:58.469 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:58.469 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:58.469 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:58.469 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:58.469 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:58.469 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:58.469 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:58.469 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:58.469 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:58.469 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:58.469 23:03:55 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:58.469 23:03:55 -- setup/hugepages.sh@89 -- # local node
00:03:58.469 23:03:55 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:58.469 23:03:55 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:58.469 23:03:55 -- setup/hugepages.sh@92 -- # local surp
00:03:58.469 23:03:55 -- setup/hugepages.sh@93 -- # local resv
00:03:58.469 23:03:55 -- setup/hugepages.sh@94 -- # local anon
00:03:58.469 23:03:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:58.469 23:03:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:58.469 23:03:55 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:58.469 23:03:55 -- setup/common.sh@18 -- # local node=
00:03:58.469 23:03:55 -- setup/common.sh@19 -- # local var val
00:03:58.469 23:03:55 -- setup/common.sh@20 -- # local mem_f mem
00:03:58.469 23:03:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:58.469 23:03:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:58.469 23:03:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:58.469 23:03:55 -- setup/common.sh@28 -- # mapfile -t mem
00:03:58.469 23:03:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:58.469 23:03:55 -- setup/common.sh@31 -- # IFS=': '
00:03:58.469 23:03:55 -- setup/common.sh@31 -- # read -r var val _
00:03:58.469 23:03:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43071440 kB' 'MemAvailable: 46790772 kB' 'Buffers: 8956 kB' 'Cached: 11205684 kB' 'SwapCached: 0 kB' 'Active: 7988144 kB' 'Inactive: 3688388 kB' 'Active(anon): 7571440 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 465164 kB' 'Mapped: 164860 kB' 'Shmem: 7109548 kB' 'KReclaimable: 222216 kB' 'Slab: 910712 kB' 'SReclaimable: 222216 kB' 'SUnreclaim: 688496 kB' 'KernelStack: 21856 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8799528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214384 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB'
00:03:58.470 [log condensed: the AnonHugePages scan skipped MemTotal through SUnreclaim; the field-by-field trace continues below] 00:03:58.470
23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.470 23:03:55 -- setup/common.sh@33 -- # echo 0 00:03:58.470 23:03:55 -- setup/common.sh@33 -- # return 0 00:03:58.470 23:03:55 -- setup/hugepages.sh@97 -- # anon=0 00:03:58.470 23:03:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.470 23:03:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.470 23:03:55 -- setup/common.sh@18 -- # local node= 00:03:58.470 23:03:55 -- setup/common.sh@19 -- # local var val 00:03:58.470 23:03:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.470 23:03:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.470 23:03:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.470 23:03:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.470 23:03:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.470 23:03:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43071300 kB' 'MemAvailable: 46791136 kB' 'Buffers: 8956 kB' 'Cached: 11205688 kB' 'SwapCached: 0 kB' 'Active: 7987884 kB' 'Inactive: 3688388 kB' 'Active(anon): 7571180 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464904 kB' 'Mapped: 164860 kB' 'Shmem: 7109552 kB' 'KReclaimable: 222216 kB' 'Slab: 910712 kB' 'SReclaimable: 222216 kB' 'SUnreclaim: 688496 kB' 'KernelStack: 21840 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8799540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214368 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB' 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ Buffers 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.470 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.470 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 
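# Editor's note: the backslash-riddled targets in these records, e.g.
# \H\u\g\e\P\a\g\e\s\_\S\u\r\p, are not log corruption. When bash's xtrace
# prints a [[ word == $pattern ]] test, it backslash-escapes every character
# of the expanded pattern to show the comparison would match literally if
# re-read. A two-line reproduction (assumed, not taken from this log):
set -x
get=HugePages_Surp
[[ MemTotal == $get ]]   # traces as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]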
00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.471 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.471 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.472 23:03:55 -- setup/common.sh@33 -- # echo 0 00:03:58.472 23:03:55 -- setup/common.sh@33 -- # return 0 00:03:58.472 23:03:55 -- setup/hugepages.sh@99 -- # surp=0 00:03:58.472 23:03:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.472 23:03:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.472 23:03:55 -- setup/common.sh@18 -- # local node= 00:03:58.472 23:03:55 -- setup/common.sh@19 -- # local var val 00:03:58.472 23:03:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.472 23:03:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.472 23:03:55 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:58.472 23:03:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.472 23:03:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.472 23:03:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 23:03:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43072376 kB' 'MemAvailable: 46791708 kB' 'Buffers: 8956 kB' 'Cached: 11205688 kB' 'SwapCached: 0 kB' 'Active: 7987860 kB' 'Inactive: 3688388 kB' 'Active(anon): 7571156 kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464880 kB' 'Mapped: 164860 kB' 'Shmem: 7109552 kB' 'KReclaimable: 222216 kB' 'Slab: 910724 kB' 'SReclaimable: 222216 kB' 'SUnreclaim: 688508 kB' 'KernelStack: 21840 kB' 'PageTables: 7576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8799552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214368 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB' 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.472 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.472 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.734 23:03:55 
-- setup/common.sh@31 -- # IFS=': ' 00:03:58.734 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.734 23:03:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.734 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.734 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.734 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.734 23:03:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.734 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.734 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.734 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.734 23:03:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.734 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.734 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.734 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.734 23:03:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.734 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.734 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.734 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.734 23:03:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.734 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.734 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.734 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.734 23:03:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.734 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.734 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.734 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.734 23:03:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.734 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.734 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.734 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.734 23:03:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.734 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.734 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.734 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.734 23:03:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 
-- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- 
setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.735 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.735 23:03:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.735 23:03:55 -- setup/common.sh@33 -- # echo 0 00:03:58.735 23:03:55 -- setup/common.sh@33 -- # return 0 00:03:58.735 23:03:55 -- setup/hugepages.sh@100 -- # resv=0 00:03:58.735 23:03:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:58.735 nr_hugepages=1024 00:03:58.735 23:03:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.735 resv_hugepages=0 00:03:58.735 23:03:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.735 surplus_hugepages=0 00:03:58.735 23:03:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.735 anon_hugepages=0 00:03:58.735 23:03:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.735 23:03:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:58.735 23:03:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.735 23:03:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.735 23:03:55 -- setup/common.sh@18 -- # local node= 00:03:58.735 23:03:55 -- setup/common.sh@19 -- # local var val 00:03:58.735 23:03:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.735 23:03:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.735 23:03:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.736 23:03:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.736 23:03:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.736 23:03:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283764 kB' 'MemFree: 43072696 kB' 'MemAvailable: 46792028 kB' 'Buffers: 8956 kB' 'Cached: 11205728 kB' 'SwapCached: 0 kB' 'Active: 7987508 kB' 'Inactive: 3688388 kB' 'Active(anon): 7570804 
kB' 'Inactive(anon): 0 kB' 'Active(file): 416704 kB' 'Inactive(file): 3688388 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464476 kB' 'Mapped: 164860 kB' 'Shmem: 7109592 kB' 'KReclaimable: 222216 kB' 'Slab: 910724 kB' 'SReclaimable: 222216 kB' 'SUnreclaim: 688508 kB' 'KernelStack: 21824 kB' 'PageTables: 7524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481908 kB' 'Committed_AS: 8799568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 214368 kB' 'VmallocChunk: 0 kB' 'Percpu: 74368 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 519540 kB' 'DirectMap2M: 11749376 kB' 'DirectMap1G: 57671680 kB' 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 
-- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- 
setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.736 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.736 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 
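# Editor's note: once HugePages_Total resolves to 1024 below, the hugepages.sh
# steps traced at @97-@117 reduce to a balance check followed by a per-node
# pass. A sketch of that flow, reusing get_meminfo from the earlier note; the
# per-node expectations (1024 on node0, 0 on node1) match the INFO line at the
# top of this trace, but the script's exact source for them is not visible
# here, so take the sysfs read as an assumption, not the SPDK implementation:
shopt -s extglob
anon=$(get_meminfo AnonHugePages)      # @97  -> 0
surp=$(get_meminfo HugePages_Surp)     # @99  -> 0
resv=$(get_meminfo HugePages_Rsvd)     # @100 -> 0
nr_hugepages=1024
(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1  # @107-@110
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do   # get_nodes, @29-@30
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}              # @32: 2 on this machine
(( no_nodes > 0 )) || exit 1           # @33
for n in "${!nodes_sys[@]}"; do        # @115-@117: re-check each node's surplus
    (( $(get_meminfo HugePages_Surp "$n") == 0 )) || exit 1
done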
00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.737 23:03:55 -- 
setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.737 23:03:55 -- setup/common.sh@33 -- # echo 1024 00:03:58.737 23:03:55 -- setup/common.sh@33 -- # return 0 00:03:58.737 23:03:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.737 23:03:55 -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.737 23:03:55 -- setup/hugepages.sh@27 -- # local node 00:03:58.737 23:03:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.737 23:03:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:58.737 23:03:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.737 23:03:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:58.737 23:03:55 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.737 23:03:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.737 23:03:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.737 23:03:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.737 23:03:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.737 23:03:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.737 23:03:55 -- setup/common.sh@18 -- # local node=0 00:03:58.737 23:03:55 -- setup/common.sh@19 -- # local var val 00:03:58.737 23:03:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:58.737 23:03:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.737 23:03:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.737 23:03:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.737 23:03:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.737 23:03:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 25497816 kB' 'MemUsed: 7087552 kB' 'SwapCached: 0 kB' 'Active: 2837524 kB' 'Inactive: 189036 kB' 'Active(anon): 2647852 kB' 'Inactive(anon): 0 kB' 'Active(file): 189672 kB' 'Inactive(file): 189036 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2860340 kB' 'Mapped: 76324 kB' 'AnonPages: 169396 kB' 'Shmem: 2481632 kB' 'KernelStack: 10936 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 129508 kB' 'Slab: 480684 kB' 'SReclaimable: 129508 kB' 'SUnreclaim: 351176 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # 
continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.737 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.737 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # continue 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:58.738 23:03:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:58.738 23:03:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.738 23:03:55 -- setup/common.sh@33 -- # echo 0 00:03:58.738 23:03:55 -- setup/common.sh@33 -- # return 0 00:03:58.738 23:03:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.738 23:03:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.738 23:03:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.738 23:03:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.738 23:03:55 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:58.738 node0=1024 expecting 1024 00:03:58.738 23:03:55 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:58.738 
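The loop traced above is the setup helper walking /proc/meminfo (or a per-node meminfo file) key by key until it hits the requested counter. A condensed sketch of that pattern, simplified from the calls visible in the trace rather than the verbatim SPDK helper:

# Condensed from the get_meminfo calls traced above: pick the global or
# per-node meminfo file, strip the "Node N " prefix, then walk the
# "key: value" pairs until the requested counter is found.
get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    local -a mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    shopt -s extglob
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
# get_meminfo HugePages_Total    -> 1024 in the run above
# get_meminfo HugePages_Surp 0   -> 0 for node 0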
00:03:58.738 real 0m6.909s 00:03:58.738 user 0m2.550s 00:03:58.738 sys 0m4.436s 00:03:58.738 23:03:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:58.738 23:03:55 -- common/autotest_common.sh@10 -- # set +x 00:03:58.738 ************************************ 00:03:58.738 END TEST no_shrink_alloc 00:03:58.738 ************************************ 00:03:58.738 23:03:55 -- setup/hugepages.sh@217 -- # clear_hp 00:03:58.738 23:03:55 -- setup/hugepages.sh@37 -- # local node hp 00:03:58.738 23:03:55 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:58.738 23:03:55 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.738 23:03:55 -- setup/hugepages.sh@41 -- # echo 0 00:03:58.738 23:03:55 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.738 23:03:55 -- setup/hugepages.sh@41 -- # echo 0 00:03:58.738 23:03:55 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:58.738 23:03:55 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.738 23:03:55 -- setup/hugepages.sh@41 -- # echo 0 00:03:58.738 23:03:55 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:58.738 23:03:55 -- setup/hugepages.sh@41 -- # echo 0 00:03:58.738 23:03:55 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:58.738 23:03:55 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:58.738 00:03:58.738 real 0m26.458s 00:03:58.738 user 0m9.416s 00:03:58.738 sys 0m15.816s 00:03:58.738 23:03:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:03:58.738 23:03:55 -- common/autotest_common.sh@10 -- # set +x 00:03:58.738 ************************************ 00:03:58.738 END TEST hugepages 00:03:58.738 ************************************ 00:03:58.738 23:03:55 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:58.738 23:03:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:58.738 23:03:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:58.738 23:03:55 -- common/autotest_common.sh@10 -- # set +x 00:03:58.738 ************************************ 00:03:58.738 START TEST driver 00:03:58.738 ************************************ 00:03:58.738 23:03:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:03:58.999 * Looking for test storage... 
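The hugepages suite above closes with clear_hp writing 0 into every per-node hugepage pool. A minimal sketch of that cleanup step, with the sysfs layout as shown in the trace (root privileges assumed):

# Reset every per-node hugepage pool to zero so later tests start clean.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done
export CLEAR_HUGE=yes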
00:03:58.999 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:03:58.999 23:03:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:58.999 23:03:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:58.999 23:03:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:58.999 23:03:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:58.999 23:03:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:58.999 23:03:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:58.999 23:03:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:58.999 23:03:55 -- scripts/common.sh@335 -- # IFS=.-: 00:03:58.999 23:03:55 -- scripts/common.sh@335 -- # read -ra ver1 00:03:58.999 23:03:55 -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.999 23:03:55 -- scripts/common.sh@336 -- # read -ra ver2 00:03:58.999 23:03:55 -- scripts/common.sh@337 -- # local 'op=<' 00:03:58.999 23:03:55 -- scripts/common.sh@339 -- # ver1_l=2 00:03:58.999 23:03:55 -- scripts/common.sh@340 -- # ver2_l=1 00:03:58.999 23:03:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:58.999 23:03:55 -- scripts/common.sh@343 -- # case "$op" in 00:03:58.999 23:03:55 -- scripts/common.sh@344 -- # : 1 00:03:58.999 23:03:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:58.999 23:03:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:58.999 23:03:55 -- scripts/common.sh@364 -- # decimal 1 00:03:58.999 23:03:55 -- scripts/common.sh@352 -- # local d=1 00:03:58.999 23:03:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.999 23:03:55 -- scripts/common.sh@354 -- # echo 1 00:03:58.999 23:03:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:58.999 23:03:55 -- scripts/common.sh@365 -- # decimal 2 00:03:58.999 23:03:55 -- scripts/common.sh@352 -- # local d=2 00:03:58.999 23:03:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.999 23:03:55 -- scripts/common.sh@354 -- # echo 2 00:03:58.999 23:03:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:58.999 23:03:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:58.999 23:03:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:58.999 23:03:55 -- scripts/common.sh@367 -- # return 0 00:03:58.999 23:03:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.999 23:03:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:58.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.999 --rc genhtml_branch_coverage=1 00:03:58.999 --rc genhtml_function_coverage=1 00:03:58.999 --rc genhtml_legend=1 00:03:58.999 --rc geninfo_all_blocks=1 00:03:58.999 --rc geninfo_unexecuted_blocks=1 00:03:58.999 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:58.999 ' 00:03:58.999 23:03:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:58.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.999 --rc genhtml_branch_coverage=1 00:03:58.999 --rc genhtml_function_coverage=1 00:03:58.999 --rc genhtml_legend=1 00:03:58.999 --rc geninfo_all_blocks=1 00:03:58.999 --rc geninfo_unexecuted_blocks=1 00:03:58.999 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:58.999 ' 00:03:58.999 23:03:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:58.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.999 --rc genhtml_branch_coverage=1 
00:03:58.999 --rc genhtml_function_coverage=1 00:03:58.999 --rc genhtml_legend=1 00:03:58.999 --rc geninfo_all_blocks=1 00:03:58.999 --rc geninfo_unexecuted_blocks=1 00:03:58.999 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:58.999 ' 00:03:58.999 23:03:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:58.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.999 --rc genhtml_branch_coverage=1 00:03:58.999 --rc genhtml_function_coverage=1 00:03:58.999 --rc genhtml_legend=1 00:03:58.999 --rc geninfo_all_blocks=1 00:03:58.999 --rc geninfo_unexecuted_blocks=1 00:03:58.999 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:03:58.999 ' 00:03:58.999 23:03:55 -- setup/driver.sh@68 -- # setup reset 00:03:58.999 23:03:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.999 23:03:55 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:04.281 23:04:00 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:04.281 23:04:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.281 23:04:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.281 23:04:00 -- common/autotest_common.sh@10 -- # set +x 00:04:04.281 ************************************ 00:04:04.281 START TEST guess_driver 00:04:04.281 ************************************ 00:04:04.281 23:04:00 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:04.281 23:04:00 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:04.281 23:04:00 -- setup/driver.sh@47 -- # local fail=0 00:04:04.281 23:04:00 -- setup/driver.sh@49 -- # pick_driver 00:04:04.281 23:04:00 -- setup/driver.sh@36 -- # vfio 00:04:04.281 23:04:00 -- setup/driver.sh@21 -- # local iommu_grups 00:04:04.281 23:04:00 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:04.281 23:04:00 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:04.281 23:04:00 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:04.281 23:04:00 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:04.281 23:04:00 -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:04.281 23:04:00 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:04.281 23:04:00 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:04.281 23:04:00 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:04.281 23:04:00 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:04.281 23:04:00 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:04.281 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:04.281 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:04.281 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:04.281 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:04.281 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:04.281 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:04.281 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:04.281 23:04:00 -- setup/driver.sh@30 -- # return 0 00:04:04.281 23:04:00 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:04.281 23:04:00 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:04.281 23:04:00 -- setup/driver.sh@51 
-- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:04.281 23:04:00 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:04.281 Looking for driver=vfio-pci 00:04:04.281 23:04:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:04.281 23:04:00 -- setup/driver.sh@45 -- # setup output config 00:04:04.281 23:04:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.281 23:04:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:06.819 23:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.819 23:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.819 23:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.819 23:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.819 23:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.819 23:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.820 23:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.820 23:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.820 23:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.820 23:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.820 23:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.820 23:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.820 23:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.820 23:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.820 23:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.820 23:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.820 23:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.820 23:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.820 23:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.820 23:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.820 23:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.820 23:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.820 23:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.820 23:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.820 23:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.820 23:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.820 23:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.820 23:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.820 23:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.820 23:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.820 23:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.820 23:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.820 23:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.820 23:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.820 23:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.820 23:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.820 23:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.820 23:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.820 23:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.820 23:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> 
]] 00:04:06.820 23:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.820 23:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.820 23:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.820 23:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.820 23:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.820 23:04:03 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.820 23:04:03 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.820 23:04:03 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.729 23:04:04 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.729 23:04:04 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:08.729 23:04:04 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.729 23:04:04 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:08.729 23:04:04 -- setup/driver.sh@65 -- # setup reset 00:04:08.729 23:04:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.729 23:04:04 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.010 00:04:14.010 real 0m9.531s 00:04:14.010 user 0m2.355s 00:04:14.010 sys 0m4.910s 00:04:14.010 23:04:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:14.010 23:04:09 -- common/autotest_common.sh@10 -- # set +x 00:04:14.010 ************************************ 00:04:14.010 END TEST guess_driver 00:04:14.010 ************************************ 00:04:14.010 00:04:14.010 real 0m14.495s 00:04:14.010 user 0m3.792s 00:04:14.010 sys 0m7.680s 00:04:14.010 23:04:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:14.010 23:04:09 -- common/autotest_common.sh@10 -- # set +x 00:04:14.010 ************************************ 00:04:14.010 END TEST driver 00:04:14.010 ************************************ 00:04:14.010 23:04:09 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:14.010 23:04:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:14.010 23:04:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.010 23:04:09 -- common/autotest_common.sh@10 -- # set +x 00:04:14.010 ************************************ 00:04:14.010 START TEST devices 00:04:14.010 ************************************ 00:04:14.010 23:04:09 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:04:14.010 * Looking for test storage... 
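The guess_driver pass above settled on vfio-pci after counting 176 IOMMU groups and resolving the vfio_pci module with modprobe --show-depends. A sketch of that decision; the uio_pci_generic fallback branch is an assumption here, since this run never reaches it:

# vfio-pci is chosen when IOMMU groups exist and vfio_pci resolves;
# the fallback named below is assumed, not exercised in this trace.
pick_driver() {
    local -a groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci &> /dev/null; then
        echo vfio-pci
    else
        echo uio_pci_generic   # assumed fallback
    fi
}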
00:04:14.010 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:04:14.010 23:04:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:14.010 23:04:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:14.010 23:04:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:14.010 23:04:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:14.010 23:04:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:14.010 23:04:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:14.010 23:04:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:14.010 23:04:09 -- scripts/common.sh@335 -- # IFS=.-: 00:04:14.010 23:04:09 -- scripts/common.sh@335 -- # read -ra ver1 00:04:14.010 23:04:09 -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.010 23:04:09 -- scripts/common.sh@336 -- # read -ra ver2 00:04:14.010 23:04:09 -- scripts/common.sh@337 -- # local 'op=<' 00:04:14.010 23:04:09 -- scripts/common.sh@339 -- # ver1_l=2 00:04:14.010 23:04:09 -- scripts/common.sh@340 -- # ver2_l=1 00:04:14.010 23:04:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:14.010 23:04:09 -- scripts/common.sh@343 -- # case "$op" in 00:04:14.010 23:04:09 -- scripts/common.sh@344 -- # : 1 00:04:14.010 23:04:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:14.010 23:04:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:14.010 23:04:09 -- scripts/common.sh@364 -- # decimal 1 00:04:14.010 23:04:09 -- scripts/common.sh@352 -- # local d=1 00:04:14.010 23:04:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.010 23:04:09 -- scripts/common.sh@354 -- # echo 1 00:04:14.010 23:04:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:14.010 23:04:09 -- scripts/common.sh@365 -- # decimal 2 00:04:14.010 23:04:09 -- scripts/common.sh@352 -- # local d=2 00:04:14.010 23:04:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.010 23:04:10 -- scripts/common.sh@354 -- # echo 2 00:04:14.010 23:04:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:14.010 23:04:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:14.010 23:04:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:14.010 23:04:10 -- scripts/common.sh@367 -- # return 0 00:04:14.010 23:04:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.010 23:04:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:14.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.010 --rc genhtml_branch_coverage=1 00:04:14.010 --rc genhtml_function_coverage=1 00:04:14.010 --rc genhtml_legend=1 00:04:14.010 --rc geninfo_all_blocks=1 00:04:14.010 --rc geninfo_unexecuted_blocks=1 00:04:14.010 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:14.010 ' 00:04:14.010 23:04:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:14.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.010 --rc genhtml_branch_coverage=1 00:04:14.010 --rc genhtml_function_coverage=1 00:04:14.010 --rc genhtml_legend=1 00:04:14.010 --rc geninfo_all_blocks=1 00:04:14.010 --rc geninfo_unexecuted_blocks=1 00:04:14.010 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:14.010 ' 00:04:14.010 23:04:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:14.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.010 --rc genhtml_branch_coverage=1 
00:04:14.010 --rc genhtml_function_coverage=1 00:04:14.010 --rc genhtml_legend=1 00:04:14.010 --rc geninfo_all_blocks=1 00:04:14.010 --rc geninfo_unexecuted_blocks=1 00:04:14.010 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:14.010 ' 00:04:14.010 23:04:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:14.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.010 --rc genhtml_branch_coverage=1 00:04:14.010 --rc genhtml_function_coverage=1 00:04:14.010 --rc genhtml_legend=1 00:04:14.010 --rc geninfo_all_blocks=1 00:04:14.010 --rc geninfo_unexecuted_blocks=1 00:04:14.010 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:14.010 ' 00:04:14.010 23:04:10 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:14.010 23:04:10 -- setup/devices.sh@192 -- # setup reset 00:04:14.010 23:04:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.010 23:04:10 -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:17.430 23:04:13 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:17.430 23:04:13 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:17.430 23:04:13 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:17.430 23:04:13 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:17.430 23:04:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:17.430 23:04:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:17.430 23:04:13 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:17.430 23:04:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:17.430 23:04:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:17.430 23:04:13 -- setup/devices.sh@196 -- # blocks=() 00:04:17.430 23:04:13 -- setup/devices.sh@196 -- # declare -a blocks 00:04:17.430 23:04:13 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:17.430 23:04:13 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:17.430 23:04:13 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:17.430 23:04:13 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:17.430 23:04:13 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:17.430 23:04:13 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:17.430 23:04:13 -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:17.430 23:04:13 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:17.430 23:04:13 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:17.430 23:04:13 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:17.430 23:04:13 -- scripts/common.sh@389 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:17.430 No valid GPT data, bailing 00:04:17.430 23:04:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:17.430 23:04:13 -- scripts/common.sh@393 -- # pt= 00:04:17.430 23:04:13 -- scripts/common.sh@394 -- # return 1 00:04:17.430 23:04:13 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:17.430 23:04:13 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:17.430 23:04:13 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:17.430 23:04:13 -- setup/common.sh@80 -- # echo 1600321314816 00:04:17.430 23:04:13 -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:17.430 23:04:13 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:17.430 23:04:13 -- 
setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:17.430 23:04:13 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:17.430 23:04:13 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:17.430 23:04:13 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:17.430 23:04:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.430 23:04:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.430 23:04:13 -- common/autotest_common.sh@10 -- # set +x 00:04:17.430 ************************************ 00:04:17.430 START TEST nvme_mount 00:04:17.430 ************************************ 00:04:17.430 23:04:13 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:17.430 23:04:13 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:17.430 23:04:13 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:17.430 23:04:13 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.430 23:04:13 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.430 23:04:13 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:17.430 23:04:13 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:17.430 23:04:13 -- setup/common.sh@40 -- # local part_no=1 00:04:17.430 23:04:13 -- setup/common.sh@41 -- # local size=1073741824 00:04:17.430 23:04:13 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:17.430 23:04:13 -- setup/common.sh@44 -- # parts=() 00:04:17.430 23:04:13 -- setup/common.sh@44 -- # local parts 00:04:17.430 23:04:13 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:17.430 23:04:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:17.430 23:04:13 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:17.430 23:04:13 -- setup/common.sh@46 -- # (( part++ )) 00:04:17.430 23:04:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:17.430 23:04:13 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:17.430 23:04:13 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:17.430 23:04:13 -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:18.368 Creating new GPT entries in memory. 00:04:18.368 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:18.368 other utilities. 00:04:18.368 23:04:14 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:18.368 23:04:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.368 23:04:14 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:18.368 23:04:14 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:18.368 23:04:14 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:19.306 Creating new GPT entries in memory. 00:04:19.306 The operation has completed successfully. 
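The sgdisk bounds above follow directly from the sector arithmetic in setup/common.sh: 1 GiB is converted to 512-byte sectors and the first partition starts at sector 2048. A reconstructed sketch of one loop pass, using the values visible in the trace:

# 1 GiB partition carved in 512-byte sectors, as in the trace above.
disk=/dev/nvme0n1
size=$(( 1073741824 / 512 ))            # 2097152 sectors per partition
part_start=2048
part_end=$(( part_start + size - 1 ))   # 2099199, matching --new=1:2048:2099199
flock "$disk" sgdisk "$disk" --new=1:"$part_start":"$part_end"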
00:04:19.306 23:04:15 -- setup/common.sh@57 -- # (( part++ )) 00:04:19.306 23:04:15 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:19.306 23:04:15 -- setup/common.sh@62 -- # wait 1256063 00:04:19.306 23:04:15 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.306 23:04:15 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:19.306 23:04:15 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.306 23:04:15 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:19.306 23:04:15 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:19.566 23:04:15 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.566 23:04:15 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.566 23:04:15 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:19.566 23:04:15 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:19.566 23:04:15 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:19.566 23:04:15 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:19.566 23:04:15 -- setup/devices.sh@53 -- # local found=0 00:04:19.566 23:04:15 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:19.566 23:04:15 -- setup/devices.sh@56 -- # : 00:04:19.566 23:04:15 -- setup/devices.sh@59 -- # local pci status 00:04:19.566 23:04:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.566 23:04:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:19.566 23:04:15 -- setup/devices.sh@47 -- # setup output config 00:04:19.566 23:04:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.566 23:04:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:22.860 23:04:18 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.860 23:04:18 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:22.860 23:04:18 -- setup/devices.sh@63 -- # found=1 00:04:22.860 23:04:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.860 23:04:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.860 23:04:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.860 23:04:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.860 23:04:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.860 23:04:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.860 23:04:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.860 23:04:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.860 23:04:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.860 23:04:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 
]] 00:04:22.860 23:04:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.860 23:04:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.860 23:04:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.860 23:04:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.860 23:04:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.860 23:04:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.860 23:04:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.860 23:04:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.860 23:04:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.860 23:04:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.860 23:04:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.860 23:04:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.860 23:04:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.860 23:04:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.860 23:04:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.860 23:04:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.860 23:04:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.860 23:04:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.860 23:04:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.860 23:04:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.860 23:04:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.860 23:04:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:22.860 23:04:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.860 23:04:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.860 23:04:19 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:22.860 23:04:19 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.860 23:04:19 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:22.860 23:04:19 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:22.860 23:04:19 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:22.860 23:04:19 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.860 23:04:19 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.860 23:04:19 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.860 23:04:19 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:22.860 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:22.860 23:04:19 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:22.860 23:04:19 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:22.860 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:22.860 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:22.860 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe 
(PMBR): 55 aa 00:04:22.860 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:22.860 23:04:19 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:22.860 23:04:19 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:22.860 23:04:19 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.860 23:04:19 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:22.860 23:04:19 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:22.860 23:04:19 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.120 23:04:19 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.120 23:04:19 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:23.120 23:04:19 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:23.120 23:04:19 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.120 23:04:19 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:23.120 23:04:19 -- setup/devices.sh@53 -- # local found=0 00:04:23.120 23:04:19 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:23.120 23:04:19 -- setup/devices.sh@56 -- # : 00:04:23.120 23:04:19 -- setup/devices.sh@59 -- # local pci status 00:04:23.120 23:04:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.120 23:04:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:23.120 23:04:19 -- setup/devices.sh@47 -- # setup output config 00:04:23.120 23:04:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.120 23:04:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:26.415 23:04:22 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:26.415 23:04:22 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:26.415 23:04:22 -- setup/devices.sh@63 -- # found=1 00:04:26.415 23:04:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.415 23:04:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:26.415 23:04:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.415 23:04:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:26.415 23:04:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.415 23:04:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:26.415 23:04:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.415 23:04:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:26.415 23:04:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.415 23:04:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:26.415 23:04:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.415 23:04:22 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:26.415 23:04:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.415 23:04:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:26.415 23:04:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.415 23:04:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:26.415 23:04:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.415 23:04:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:26.415 23:04:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.415 23:04:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:26.415 23:04:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.415 23:04:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:26.415 23:04:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.415 23:04:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:26.415 23:04:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.415 23:04:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:26.415 23:04:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.415 23:04:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:26.415 23:04:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.415 23:04:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:26.415 23:04:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.415 23:04:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:26.415 23:04:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.415 23:04:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:26.415 23:04:22 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:26.415 23:04:22 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.415 23:04:22 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.415 23:04:22 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.415 23:04:22 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.415 23:04:22 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:26.415 23:04:22 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:26.415 23:04:22 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:26.415 23:04:22 -- setup/devices.sh@50 -- # local mount_point= 00:04:26.415 23:04:22 -- setup/devices.sh@51 -- # local test_file= 00:04:26.415 23:04:22 -- setup/devices.sh@53 -- # local found=0 00:04:26.415 23:04:22 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:26.415 23:04:22 -- setup/devices.sh@59 -- # local pci status 00:04:26.415 23:04:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.415 23:04:22 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:26.415 23:04:22 -- setup/devices.sh@47 -- # setup output config 00:04:26.415 23:04:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.415 23:04:22 -- setup/common.sh@10 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:29.710 23:04:25 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.710 23:04:25 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:29.710 23:04:25 -- setup/devices.sh@63 -- # found=1 00:04:29.710 23:04:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 23:04:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.710 23:04:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 23:04:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.710 23:04:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 23:04:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.710 23:04:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 23:04:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.710 23:04:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 23:04:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.710 23:04:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 23:04:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.710 23:04:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 23:04:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.710 23:04:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 23:04:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.710 23:04:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 23:04:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.710 23:04:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 23:04:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.710 23:04:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 23:04:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.710 23:04:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 23:04:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.710 23:04:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 23:04:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.710 23:04:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 23:04:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.710 23:04:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 23:04:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.710 23:04:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 23:04:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.710 23:04:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.710 23:04:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.710 23:04:26 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:29.710 23:04:26 -- setup/devices.sh@68 -- # return 0 00:04:29.710 23:04:26 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:29.710 23:04:26 -- setup/devices.sh@20 -- # mountpoint -q 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.710 23:04:26 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.710 23:04:26 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:29.710 23:04:26 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:29.710 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:29.710 00:04:29.710 real 0m12.228s 00:04:29.710 user 0m3.612s 00:04:29.710 sys 0m6.563s 00:04:29.710 23:04:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:29.710 23:04:26 -- common/autotest_common.sh@10 -- # set +x 00:04:29.710 ************************************ 00:04:29.710 END TEST nvme_mount 00:04:29.710 ************************************ 00:04:29.710 23:04:26 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:29.710 23:04:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.710 23:04:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.710 23:04:26 -- common/autotest_common.sh@10 -- # set +x 00:04:29.710 ************************************ 00:04:29.710 START TEST dm_mount 00:04:29.710 ************************************ 00:04:29.710 23:04:26 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:29.710 23:04:26 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:29.710 23:04:26 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:29.710 23:04:26 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:29.710 23:04:26 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:29.710 23:04:26 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:29.710 23:04:26 -- setup/common.sh@40 -- # local part_no=2 00:04:29.710 23:04:26 -- setup/common.sh@41 -- # local size=1073741824 00:04:29.710 23:04:26 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:29.710 23:04:26 -- setup/common.sh@44 -- # parts=() 00:04:29.710 23:04:26 -- setup/common.sh@44 -- # local parts 00:04:29.710 23:04:26 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:29.710 23:04:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.710 23:04:26 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:29.710 23:04:26 -- setup/common.sh@46 -- # (( part++ )) 00:04:29.710 23:04:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.710 23:04:26 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:29.710 23:04:26 -- setup/common.sh@46 -- # (( part++ )) 00:04:29.710 23:04:26 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.710 23:04:26 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:29.710 23:04:26 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:29.710 23:04:26 -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:30.649 Creating new GPT entries in memory. 00:04:30.649 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:30.649 other utilities. 00:04:30.649 23:04:27 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:30.649 23:04:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.649 23:04:27 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:30.649 23:04:27 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.649 23:04:27 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:31.588 Creating new GPT entries in memory. 00:04:31.588 The operation has completed successfully. 
00:04:31.588 23:04:28 -- setup/common.sh@57 -- # (( part++ )) 00:04:31.588 23:04:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.588 23:04:28 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:31.588 23:04:28 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:31.588 23:04:28 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:32.526 The operation has completed successfully. 00:04:32.526 23:04:29 -- setup/common.sh@57 -- # (( part++ )) 00:04:32.526 23:04:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.526 23:04:29 -- setup/common.sh@62 -- # wait 1260565 00:04:32.785 23:04:29 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:32.785 23:04:29 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:32.785 23:04:29 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:32.785 23:04:29 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:32.785 23:04:29 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:32.785 23:04:29 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:32.785 23:04:29 -- setup/devices.sh@161 -- # break 00:04:32.785 23:04:29 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:32.785 23:04:29 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:32.785 23:04:29 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:32.785 23:04:29 -- setup/devices.sh@166 -- # dm=dm-0 00:04:32.785 23:04:29 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:32.785 23:04:29 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:32.785 23:04:29 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:32.785 23:04:29 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:04:32.785 23:04:29 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:32.785 23:04:29 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:32.785 23:04:29 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:32.785 23:04:29 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:32.785 23:04:29 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:32.785 23:04:29 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:32.785 23:04:29 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:32.785 23:04:29 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:32.785 23:04:29 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:32.785 23:04:29 -- setup/devices.sh@53 -- # local found=0 00:04:32.785 23:04:29 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:32.785 23:04:29 -- setup/devices.sh@56 -- # : 
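The dm_mount setup traced above can be sketched as follows. The dmsetup table itself is never echoed in the log, so the linear concatenation of the two 1 GiB partitions below is an assumption (the sector counts are taken from the sgdisk ranges); the wait loop, readlink resolution, and holder checks mirror the trace. $SPDK_DIR stands in for the workspace checkout path.

dm_name=nvme_dm_test
mnt=$SPDK_DIR/test/setup/dm_mount
# Assumed table: concatenate nvme0n1p1 and nvme0n1p2 into one linear target.
dmsetup create "$dm_name" <<'TABLE'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
TABLE
for t in {1..5}; do                              # wait briefly for the node to appear
    [[ -e /dev/mapper/$dm_name ]] && break
    sleep 1
done
dm=$(basename "$(readlink -f /dev/mapper/$dm_name)")   # resolves to dm-0 in the log
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]        # both partitions list dm-0 as holder
mkfs.ext4 -qF "/dev/mapper/$dm_name"
mkdir -p "$mnt" && mount "/dev/mapper/$dm_name" "$mnt"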
00:04:32.785 23:04:29 -- setup/devices.sh@59 -- # local pci status 00:04:32.785 23:04:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.785 23:04:29 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:32.785 23:04:29 -- setup/devices.sh@47 -- # setup output config 00:04:32.785 23:04:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.785 23:04:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:36.079 23:04:32 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.079 23:04:32 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:36.079 23:04:32 -- setup/devices.sh@63 -- # found=1 00:04:36.079 23:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.079 23:04:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.079 23:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.079 23:04:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.079 23:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.079 23:04:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.079 23:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.079 23:04:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.079 23:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.079 23:04:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.079 23:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.079 23:04:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.079 23:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.079 23:04:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.079 23:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.079 23:04:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.079 23:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.079 23:04:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.079 23:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.079 23:04:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.079 23:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.079 23:04:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.079 23:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.079 23:04:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.079 23:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.079 23:04:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.079 23:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.079 23:04:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.079 23:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.079 23:04:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.079 23:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.079 23:04:32 -- setup/devices.sh@62 -- # 
[[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:36.079 23:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.079 23:04:32 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.079 23:04:32 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:36.079 23:04:32 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:36.079 23:04:32 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:36.079 23:04:32 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:36.079 23:04:32 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:36.079 23:04:32 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:36.079 23:04:32 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:36.079 23:04:32 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:36.079 23:04:32 -- setup/devices.sh@50 -- # local mount_point= 00:04:36.079 23:04:32 -- setup/devices.sh@51 -- # local test_file= 00:04:36.079 23:04:32 -- setup/devices.sh@53 -- # local found=0 00:04:36.079 23:04:32 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:36.079 23:04:32 -- setup/devices.sh@59 -- # local pci status 00:04:36.079 23:04:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.079 23:04:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:36.079 23:04:32 -- setup/devices.sh@47 -- # setup output config 00:04:36.079 23:04:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.079 23:04:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:04:39.538 23:04:35 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:39.538 23:04:35 -- setup/devices.sh@63 -- # found=1 00:04:39.538 23:04:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.538 23:04:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.538 23:04:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.538 23:04:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.538 23:04:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.538 23:04:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.538 23:04:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.538 23:04:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:39.538 23:04:35 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.538 23:04:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.538 23:04:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.538 23:04:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.538 23:04:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.538 23:04:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.538 23:04:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.538 23:04:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.538 23:04:35 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.538 23:04:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.538 23:04:35 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:39.538 23:04:35 -- setup/devices.sh@68 -- # return 0 00:04:39.538 23:04:35 -- setup/devices.sh@187 -- # cleanup_dm 00:04:39.538 23:04:35 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:39.538 23:04:35 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:39.538 23:04:35 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:39.538 23:04:35 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:39.538 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:39.538 23:04:35 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:39.538 00:04:39.538 real 0m9.780s 00:04:39.538 user 0m2.358s 00:04:39.538 sys 0m4.497s 00:04:39.538 23:04:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.538 23:04:35 -- common/autotest_common.sh@10 -- # set +x 00:04:39.538 ************************************ 00:04:39.538 END TEST dm_mount 00:04:39.538 ************************************ 00:04:39.538 23:04:35 -- setup/devices.sh@1 -- # cleanup 00:04:39.538 23:04:35 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:39.538 23:04:35 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.538 23:04:35 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:39.538 23:04:35 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.538 23:04:35 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:39.798 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:39.798 /dev/nvme0n1: 8 
bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:39.798 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:39.798 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:39.798 23:04:36 -- setup/devices.sh@12 -- # cleanup_dm 00:04:39.798 23:04:36 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:04:39.798 23:04:36 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:39.798 23:04:36 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.798 23:04:36 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:39.798 23:04:36 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.798 23:04:36 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:39.798 00:04:39.798 real 0m26.360s 00:04:39.798 user 0m7.435s 00:04:39.798 sys 0m13.838s 00:04:39.798 23:04:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.798 23:04:36 -- common/autotest_common.sh@10 -- # set +x 00:04:39.798 ************************************ 00:04:39.798 END TEST devices 00:04:39.798 ************************************ 00:04:39.798 00:04:39.798 real 1m31.732s 00:04:39.798 user 0m28.360s 00:04:39.798 sys 0m52.157s 00:04:39.798 23:04:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:39.798 23:04:36 -- common/autotest_common.sh@10 -- # set +x 00:04:39.798 ************************************ 00:04:39.798 END TEST setup.sh 00:04:39.798 ************************************ 00:04:39.798 23:04:36 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:04:43.091 Hugepages 00:04:43.091 node hugesize free / total 00:04:43.091 node0 1048576kB 0 / 0 00:04:43.091 node0 2048kB 2048 / 2048 00:04:43.091 node1 1048576kB 0 / 0 00:04:43.091 node1 2048kB 0 / 0 00:04:43.091 00:04:43.091 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:43.091 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:43.091 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:43.091 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:43.091 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:43.091 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:43.091 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:43.091 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:43.091 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:43.091 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:43.091 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:43.091 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:43.091 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:43.091 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:43.091 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:43.091 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:43.091 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:43.091 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:43.091 23:04:39 -- spdk/autotest.sh@128 -- # uname -s 00:04:43.091 23:04:39 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:43.091 23:04:39 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:43.091 23:04:39 -- common/autotest_common.sh@1526 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:46.385 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:46.385 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:46.385 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:46.385 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:46.385 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 
00:04:46.385 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:46.385 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:46.385 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:46.385 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:46.385 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:46.385 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:46.385 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:46.385 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:46.385 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:46.385 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:46.385 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:47.766 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:48.026 23:04:44 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:48.966 23:04:45 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:48.966 23:04:45 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:48.966 23:04:45 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:48.966 23:04:45 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:48.966 23:04:45 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:48.966 23:04:45 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:48.966 23:04:45 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:48.966 23:04:45 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:48.966 23:04:45 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:48.966 23:04:45 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:04:48.966 23:04:45 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:04:48.966 23:04:45 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.261 Waiting for block devices as requested 00:04:52.261 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:52.521 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:52.521 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:52.521 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:52.785 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:52.785 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:52.785 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:52.785 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:53.049 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:53.049 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:53.049 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:53.308 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:53.308 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:53.308 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:53.568 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:53.568 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:53.568 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:53.828 23:04:50 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:53.828 23:04:50 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:53.828 23:04:50 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:04:53.828 23:04:50 -- common/autotest_common.sh@1497 -- # grep 0000:d8:00.0/nvme/nvme 00:04:53.828 23:04:50 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:53.828 23:04:50 -- common/autotest_common.sh@1498 -- # [[ -z 
/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:53.828 23:04:50 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:53.828 23:04:50 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:53.828 23:04:50 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:53.828 23:04:50 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:53.828 23:04:50 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:53.828 23:04:50 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:53.828 23:04:50 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:53.828 23:04:50 -- common/autotest_common.sh@1540 -- # oacs=' 0xe' 00:04:53.828 23:04:50 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:53.828 23:04:50 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:53.828 23:04:50 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:53.828 23:04:50 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:53.828 23:04:50 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:53.828 23:04:50 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:53.828 23:04:50 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:53.828 23:04:50 -- common/autotest_common.sh@1552 -- # continue 00:04:53.828 23:04:50 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:53.828 23:04:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:53.828 23:04:50 -- common/autotest_common.sh@10 -- # set +x 00:04:53.828 23:04:50 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:53.828 23:04:50 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:53.828 23:04:50 -- common/autotest_common.sh@10 -- # set +x 00:04:53.828 23:04:50 -- spdk/autotest.sh@137 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:04:57.123 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.123 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.123 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.123 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.123 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.123 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.123 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.123 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.123 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.123 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.123 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.123 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.123 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.123 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.123 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.123 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:59.032 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:59.032 23:04:55 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:59.032 23:04:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:59.032 23:04:55 -- common/autotest_common.sh@10 -- # set +x 00:04:59.032 23:04:55 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:59.032 23:04:55 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:59.032 23:04:55 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:59.032 23:04:55 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:59.032 23:04:55 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:59.032 23:04:55 -- common/autotest_common.sh@1574 -- # 
get_nvme_bdfs 00:04:59.033 23:04:55 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:59.033 23:04:55 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:59.033 23:04:55 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:59.033 23:04:55 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:59.033 23:04:55 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:59.033 23:04:55 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:04:59.033 23:04:55 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:04:59.033 23:04:55 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:59.033 23:04:55 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:59.033 23:04:55 -- common/autotest_common.sh@1575 -- # device=0x0a54 00:04:59.033 23:04:55 -- common/autotest_common.sh@1576 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:59.033 23:04:55 -- common/autotest_common.sh@1577 -- # bdfs+=($bdf) 00:04:59.033 23:04:55 -- common/autotest_common.sh@1581 -- # printf '%s\n' 0000:d8:00.0 00:04:59.033 23:04:55 -- common/autotest_common.sh@1587 -- # [[ -z 0000:d8:00.0 ]] 00:04:59.033 23:04:55 -- common/autotest_common.sh@1592 -- # spdk_tgt_pid=1270481 00:04:59.033 23:04:55 -- common/autotest_common.sh@1593 -- # waitforlisten 1270481 00:04:59.033 23:04:55 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:04:59.033 23:04:55 -- common/autotest_common.sh@829 -- # '[' -z 1270481 ']' 00:04:59.033 23:04:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.033 23:04:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.033 23:04:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.033 23:04:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.033 23:04:55 -- common/autotest_common.sh@10 -- # set +x 00:04:59.033 [2024-11-17 23:04:55.486743] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:04:59.033 [2024-11-17 23:04:55.486821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1270481 ] 00:04:59.033 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.033 [2024-11-17 23:04:55.557878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.033 [2024-11-17 23:04:55.635514] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:59.033 [2024-11-17 23:04:55.635624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.972 23:04:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.972 23:04:56 -- common/autotest_common.sh@862 -- # return 0 00:04:59.972 23:04:56 -- common/autotest_common.sh@1595 -- # bdf_id=0 00:04:59.972 23:04:56 -- common/autotest_common.sh@1596 -- # for bdf in "${bdfs[@]}" 00:04:59.972 23:04:56 -- common/autotest_common.sh@1597 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:03.263 nvme0n1 00:05:03.263 23:04:59 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:03.263 [2024-11-17 23:04:59.490494] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:03.263 request: 00:05:03.263 { 00:05:03.263 "nvme_ctrlr_name": "nvme0", 00:05:03.263 "password": "test", 00:05:03.263 "method": "bdev_nvme_opal_revert", 00:05:03.263 "req_id": 1 00:05:03.263 } 00:05:03.263 Got JSON-RPC error response 00:05:03.263 response: 00:05:03.263 { 00:05:03.263 "code": -32602, 00:05:03.263 "message": "Invalid parameters" 00:05:03.263 } 00:05:03.263 23:04:59 -- common/autotest_common.sh@1599 -- # true 00:05:03.263 23:04:59 -- common/autotest_common.sh@1600 -- # (( ++bdf_id )) 00:05:03.263 23:04:59 -- common/autotest_common.sh@1603 -- # killprocess 1270481 00:05:03.263 23:04:59 -- common/autotest_common.sh@936 -- # '[' -z 1270481 ']' 00:05:03.263 23:04:59 -- common/autotest_common.sh@940 -- # kill -0 1270481 00:05:03.263 23:04:59 -- common/autotest_common.sh@941 -- # uname 00:05:03.263 23:04:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:03.263 23:04:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1270481 00:05:03.263 23:04:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:03.263 23:04:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:03.263 23:04:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1270481' 00:05:03.263 killing process with pid 1270481 00:05:03.263 23:04:59 -- common/autotest_common.sh@955 -- # kill 1270481 00:05:03.263 23:04:59 -- common/autotest_common.sh@960 -- # wait 1270481 00:05:05.803 23:05:01 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:05:05.803 23:05:01 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:05:05.803 23:05:01 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:05.803 23:05:01 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:05:05.803 23:05:01 -- spdk/autotest.sh@160 -- # timing_enter lib 00:05:05.803 23:05:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:05.803 23:05:01 -- common/autotest_common.sh@10 -- # set +x 00:05:05.803 23:05:01 -- spdk/autotest.sh@162 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:05.803 
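The attach/revert exchange above can be replayed by hand against a running spdk_tgt; the method names and arguments below are taken verbatim from the trace, and the second call is expected to fail with -32602 on a controller without Opal support, exactly as logged.

$SPDK_DIR/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
# Returns the JSON-RPC error shown above when the drive lacks Opal:
$SPDK_DIR/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test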
23:05:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.803 23:05:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.803 23:05:01 -- common/autotest_common.sh@10 -- # set +x 00:05:05.803 ************************************ 00:05:05.803 START TEST env 00:05:05.803 ************************************ 00:05:05.803 23:05:01 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:05:05.803 * Looking for test storage... 00:05:05.803 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:05:05.803 23:05:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:05.803 23:05:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:05.803 23:05:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:05.803 23:05:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:05.803 23:05:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:05.803 23:05:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:05.803 23:05:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:05.803 23:05:01 -- scripts/common.sh@335 -- # IFS=.-: 00:05:05.803 23:05:01 -- scripts/common.sh@335 -- # read -ra ver1 00:05:05.803 23:05:01 -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.803 23:05:01 -- scripts/common.sh@336 -- # read -ra ver2 00:05:05.803 23:05:01 -- scripts/common.sh@337 -- # local 'op=<' 00:05:05.803 23:05:01 -- scripts/common.sh@339 -- # ver1_l=2 00:05:05.803 23:05:01 -- scripts/common.sh@340 -- # ver2_l=1 00:05:05.803 23:05:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:05.803 23:05:01 -- scripts/common.sh@343 -- # case "$op" in 00:05:05.803 23:05:01 -- scripts/common.sh@344 -- # : 1 00:05:05.803 23:05:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:05.803 23:05:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.803 23:05:01 -- scripts/common.sh@364 -- # decimal 1 00:05:05.803 23:05:01 -- scripts/common.sh@352 -- # local d=1 00:05:05.803 23:05:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.803 23:05:01 -- scripts/common.sh@354 -- # echo 1 00:05:05.803 23:05:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:05.803 23:05:01 -- scripts/common.sh@365 -- # decimal 2 00:05:05.803 23:05:01 -- scripts/common.sh@352 -- # local d=2 00:05:05.803 23:05:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.803 23:05:01 -- scripts/common.sh@354 -- # echo 2 00:05:05.803 23:05:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:05.803 23:05:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:05.803 23:05:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:05.803 23:05:01 -- scripts/common.sh@367 -- # return 0 00:05:05.803 23:05:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.803 23:05:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:05.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.803 --rc genhtml_branch_coverage=1 00:05:05.803 --rc genhtml_function_coverage=1 00:05:05.803 --rc genhtml_legend=1 00:05:05.803 --rc geninfo_all_blocks=1 00:05:05.803 --rc geninfo_unexecuted_blocks=1 00:05:05.803 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:05.803 ' 00:05:05.803 23:05:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:05.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.803 --rc genhtml_branch_coverage=1 00:05:05.803 --rc genhtml_function_coverage=1 00:05:05.803 --rc genhtml_legend=1 00:05:05.803 --rc geninfo_all_blocks=1 00:05:05.803 --rc geninfo_unexecuted_blocks=1 00:05:05.803 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:05.803 ' 00:05:05.803 23:05:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:05.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.803 --rc genhtml_branch_coverage=1 00:05:05.803 --rc genhtml_function_coverage=1 00:05:05.803 --rc genhtml_legend=1 00:05:05.803 --rc geninfo_all_blocks=1 00:05:05.803 --rc geninfo_unexecuted_blocks=1 00:05:05.803 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:05.803 ' 00:05:05.803 23:05:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:05.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.803 --rc genhtml_branch_coverage=1 00:05:05.803 --rc genhtml_function_coverage=1 00:05:05.803 --rc genhtml_legend=1 00:05:05.803 --rc geninfo_all_blocks=1 00:05:05.803 --rc geninfo_unexecuted_blocks=1 00:05:05.803 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:05.803 ' 00:05:05.803 23:05:01 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:05:05.803 23:05:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.803 23:05:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.803 23:05:01 -- common/autotest_common.sh@10 -- # set +x 00:05:05.803 ************************************ 00:05:05.803 START TEST env_memory 00:05:05.803 ************************************ 00:05:05.803 23:05:01 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 
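The scripts/common.sh trace above is a pure-bash dotted-version comparison that decides which lcov flags to use. A condensed sketch of the same idea is below; it is simplified relative to the real helper (no handling of non-numeric components) and the LCOV_OPTS value is the one the log itself exports.

lt() {   # usage: lt 1.15 2  -> exit 0 when the first version sorts before the second
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
if lt "$(lcov --version | awk '{print $NF}')" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi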
00:05:05.803 00:05:05.803 00:05:05.803 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.803 http://cunit.sourceforge.net/ 00:05:05.803 00:05:05.803 00:05:05.803 Suite: memory 00:05:05.804 Test: alloc and free memory map ...[2024-11-17 23:05:02.031597] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:05.804 passed 00:05:05.804 Test: mem map translation ...[2024-11-17 23:05:02.044482] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:05.804 [2024-11-17 23:05:02.044501] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:05.804 [2024-11-17 23:05:02.044531] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:05.804 [2024-11-17 23:05:02.044543] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:05.804 passed 00:05:05.804 Test: mem map registration ...[2024-11-17 23:05:02.064006] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:05.804 [2024-11-17 23:05:02.064030] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:05.804 passed 00:05:05.804 Test: mem map adjacent registrations ...passed 00:05:05.804 00:05:05.804 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.804 suites 1 1 n/a 0 0 00:05:05.804 tests 4 4 4 0 0 00:05:05.804 asserts 152 152 152 0 n/a 00:05:05.804 00:05:05.804 Elapsed time = 0.081 seconds 00:05:05.804 00:05:05.804 real 0m0.094s 00:05:05.804 user 0m0.082s 00:05:05.804 sys 0m0.012s 00:05:05.804 23:05:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.804 23:05:02 -- common/autotest_common.sh@10 -- # set +x 00:05:05.804 ************************************ 00:05:05.804 END TEST env_memory 00:05:05.804 ************************************ 00:05:05.804 23:05:02 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:05.804 23:05:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.804 23:05:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.804 23:05:02 -- common/autotest_common.sh@10 -- # set +x 00:05:05.804 ************************************ 00:05:05.804 START TEST env_vtophys 00:05:05.804 ************************************ 00:05:05.804 23:05:02 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:05.804 EAL: lib.eal log level changed from notice to debug 00:05:05.804 EAL: Detected lcore 0 as core 0 on socket 0 00:05:05.804 EAL: Detected lcore 1 as core 1 on socket 0 00:05:05.804 EAL: Detected lcore 2 as core 2 on socket 0 00:05:05.804 EAL: Detected lcore 3 as core 3 on socket 0 00:05:05.804 EAL: Detected lcore 4 as core 4 on socket 0 00:05:05.804 EAL: Detected lcore 5 as core 5 on socket 0 00:05:05.804 EAL: Detected lcore 6 as 
core 6 on socket 0 00:05:05.804 EAL: Detected lcore 7 as core 8 on socket 0 00:05:05.804 EAL: Detected lcore 8 as core 9 on socket 0 00:05:05.804 EAL: Detected lcore 9 as core 10 on socket 0 00:05:05.804 EAL: Detected lcore 10 as core 11 on socket 0 00:05:05.804 EAL: Detected lcore 11 as core 12 on socket 0 00:05:05.804 EAL: Detected lcore 12 as core 13 on socket 0 00:05:05.804 EAL: Detected lcore 13 as core 14 on socket 0 00:05:05.804 EAL: Detected lcore 14 as core 16 on socket 0 00:05:05.804 EAL: Detected lcore 15 as core 17 on socket 0 00:05:05.804 EAL: Detected lcore 16 as core 18 on socket 0 00:05:05.804 EAL: Detected lcore 17 as core 19 on socket 0 00:05:05.804 EAL: Detected lcore 18 as core 20 on socket 0 00:05:05.804 EAL: Detected lcore 19 as core 21 on socket 0 00:05:05.804 EAL: Detected lcore 20 as core 22 on socket 0 00:05:05.804 EAL: Detected lcore 21 as core 24 on socket 0 00:05:05.804 EAL: Detected lcore 22 as core 25 on socket 0 00:05:05.804 EAL: Detected lcore 23 as core 26 on socket 0 00:05:05.804 EAL: Detected lcore 24 as core 27 on socket 0 00:05:05.804 EAL: Detected lcore 25 as core 28 on socket 0 00:05:05.804 EAL: Detected lcore 26 as core 29 on socket 0 00:05:05.804 EAL: Detected lcore 27 as core 30 on socket 0 00:05:05.804 EAL: Detected lcore 28 as core 0 on socket 1 00:05:05.804 EAL: Detected lcore 29 as core 1 on socket 1 00:05:05.804 EAL: Detected lcore 30 as core 2 on socket 1 00:05:05.804 EAL: Detected lcore 31 as core 3 on socket 1 00:05:05.804 EAL: Detected lcore 32 as core 4 on socket 1 00:05:05.804 EAL: Detected lcore 33 as core 5 on socket 1 00:05:05.804 EAL: Detected lcore 34 as core 6 on socket 1 00:05:05.804 EAL: Detected lcore 35 as core 8 on socket 1 00:05:05.804 EAL: Detected lcore 36 as core 9 on socket 1 00:05:05.804 EAL: Detected lcore 37 as core 10 on socket 1 00:05:05.804 EAL: Detected lcore 38 as core 11 on socket 1 00:05:05.804 EAL: Detected lcore 39 as core 12 on socket 1 00:05:05.804 EAL: Detected lcore 40 as core 13 on socket 1 00:05:05.804 EAL: Detected lcore 41 as core 14 on socket 1 00:05:05.804 EAL: Detected lcore 42 as core 16 on socket 1 00:05:05.804 EAL: Detected lcore 43 as core 17 on socket 1 00:05:05.804 EAL: Detected lcore 44 as core 18 on socket 1 00:05:05.804 EAL: Detected lcore 45 as core 19 on socket 1 00:05:05.804 EAL: Detected lcore 46 as core 20 on socket 1 00:05:05.804 EAL: Detected lcore 47 as core 21 on socket 1 00:05:05.804 EAL: Detected lcore 48 as core 22 on socket 1 00:05:05.804 EAL: Detected lcore 49 as core 24 on socket 1 00:05:05.804 EAL: Detected lcore 50 as core 25 on socket 1 00:05:05.804 EAL: Detected lcore 51 as core 26 on socket 1 00:05:05.804 EAL: Detected lcore 52 as core 27 on socket 1 00:05:05.804 EAL: Detected lcore 53 as core 28 on socket 1 00:05:05.804 EAL: Detected lcore 54 as core 29 on socket 1 00:05:05.804 EAL: Detected lcore 55 as core 30 on socket 1 00:05:05.804 EAL: Detected lcore 56 as core 0 on socket 0 00:05:05.804 EAL: Detected lcore 57 as core 1 on socket 0 00:05:05.804 EAL: Detected lcore 58 as core 2 on socket 0 00:05:05.804 EAL: Detected lcore 59 as core 3 on socket 0 00:05:05.804 EAL: Detected lcore 60 as core 4 on socket 0 00:05:05.804 EAL: Detected lcore 61 as core 5 on socket 0 00:05:05.804 EAL: Detected lcore 62 as core 6 on socket 0 00:05:05.804 EAL: Detected lcore 63 as core 8 on socket 0 00:05:05.804 EAL: Detected lcore 64 as core 9 on socket 0 00:05:05.804 EAL: Detected lcore 65 as core 10 on socket 0 00:05:05.804 EAL: Detected lcore 66 as core 11 on socket 0 00:05:05.804 EAL: 
Detected lcore 67 as core 12 on socket 0 00:05:05.804 EAL: Detected lcore 68 as core 13 on socket 0 00:05:05.804 EAL: Detected lcore 69 as core 14 on socket 0 00:05:05.804 EAL: Detected lcore 70 as core 16 on socket 0 00:05:05.804 EAL: Detected lcore 71 as core 17 on socket 0 00:05:05.804 EAL: Detected lcore 72 as core 18 on socket 0 00:05:05.804 EAL: Detected lcore 73 as core 19 on socket 0 00:05:05.804 EAL: Detected lcore 74 as core 20 on socket 0 00:05:05.804 EAL: Detected lcore 75 as core 21 on socket 0 00:05:05.804 EAL: Detected lcore 76 as core 22 on socket 0 00:05:05.804 EAL: Detected lcore 77 as core 24 on socket 0 00:05:05.804 EAL: Detected lcore 78 as core 25 on socket 0 00:05:05.804 EAL: Detected lcore 79 as core 26 on socket 0 00:05:05.804 EAL: Detected lcore 80 as core 27 on socket 0 00:05:05.804 EAL: Detected lcore 81 as core 28 on socket 0 00:05:05.804 EAL: Detected lcore 82 as core 29 on socket 0 00:05:05.804 EAL: Detected lcore 83 as core 30 on socket 0 00:05:05.804 EAL: Detected lcore 84 as core 0 on socket 1 00:05:05.804 EAL: Detected lcore 85 as core 1 on socket 1 00:05:05.804 EAL: Detected lcore 86 as core 2 on socket 1 00:05:05.804 EAL: Detected lcore 87 as core 3 on socket 1 00:05:05.804 EAL: Detected lcore 88 as core 4 on socket 1 00:05:05.804 EAL: Detected lcore 89 as core 5 on socket 1 00:05:05.804 EAL: Detected lcore 90 as core 6 on socket 1 00:05:05.804 EAL: Detected lcore 91 as core 8 on socket 1 00:05:05.804 EAL: Detected lcore 92 as core 9 on socket 1 00:05:05.804 EAL: Detected lcore 93 as core 10 on socket 1 00:05:05.804 EAL: Detected lcore 94 as core 11 on socket 1 00:05:05.804 EAL: Detected lcore 95 as core 12 on socket 1 00:05:05.804 EAL: Detected lcore 96 as core 13 on socket 1 00:05:05.804 EAL: Detected lcore 97 as core 14 on socket 1 00:05:05.804 EAL: Detected lcore 98 as core 16 on socket 1 00:05:05.804 EAL: Detected lcore 99 as core 17 on socket 1 00:05:05.804 EAL: Detected lcore 100 as core 18 on socket 1 00:05:05.804 EAL: Detected lcore 101 as core 19 on socket 1 00:05:05.804 EAL: Detected lcore 102 as core 20 on socket 1 00:05:05.804 EAL: Detected lcore 103 as core 21 on socket 1 00:05:05.804 EAL: Detected lcore 104 as core 22 on socket 1 00:05:05.804 EAL: Detected lcore 105 as core 24 on socket 1 00:05:05.804 EAL: Detected lcore 106 as core 25 on socket 1 00:05:05.804 EAL: Detected lcore 107 as core 26 on socket 1 00:05:05.804 EAL: Detected lcore 108 as core 27 on socket 1 00:05:05.804 EAL: Detected lcore 109 as core 28 on socket 1 00:05:05.804 EAL: Detected lcore 110 as core 29 on socket 1 00:05:05.804 EAL: Detected lcore 111 as core 30 on socket 1 00:05:05.804 EAL: Maximum logical cores by configuration: 128 00:05:05.804 EAL: Detected CPU lcores: 112 00:05:05.804 EAL: Detected NUMA nodes: 2 00:05:05.804 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:05.804 EAL: Checking presence of .so 'librte_eal.so.24' 00:05:05.804 EAL: Checking presence of .so 'librte_eal.so' 00:05:05.804 EAL: Detected static linkage of DPDK 00:05:05.804 EAL: No shared files mode enabled, IPC will be disabled 00:05:05.804 EAL: Bus pci wants IOVA as 'DC' 00:05:05.804 EAL: Buses did not request a specific IOVA mode. 00:05:05.804 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:05.804 EAL: Selected IOVA mode 'VA' 00:05:05.804 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.804 EAL: Probing VFIO support... 
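The lcore inventory EAL prints above (112 logical cores across 2 NUMA nodes) can be sanity-checked outside of EAL with standard tooling; a quick sketch:

lscpu | grep -E '^(CPU\(s\)|Socket\(s\)|NUMA node\(s\)):'
# Per-node 2 MB hugepage pools, matching the earlier Hugepages summary:
cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages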
00:05:05.804 EAL: IOMMU type 1 (Type 1) is supported 00:05:05.804 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:05.804 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:05.804 EAL: VFIO support initialized 00:05:05.804 EAL: Ask a virtual area of 0x2e000 bytes 00:05:05.804 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:05.804 EAL: Setting up physically contiguous memory... 00:05:05.804 EAL: Setting maximum number of open files to 524288 00:05:05.804 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:05.805 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:05.805 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:05.805 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.805 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:05.805 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.805 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.805 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:05.805 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:05.805 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.805 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:05.805 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.805 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.805 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:05.805 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:05.805 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.805 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:05.805 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.805 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.805 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:05.805 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:05.805 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.805 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:05.805 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.805 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.805 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:05.805 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:05.805 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:05.805 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.805 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:05.805 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.805 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.805 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:05.805 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:05.805 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.805 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:05.805 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.805 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.805 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:05.805 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:05.805 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.805 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:05.805 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.805 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.805 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:05:05.805 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:05.805 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.805 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:05.805 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.805 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.805 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:05.805 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:05.805 EAL: Hugepages will be freed exactly as allocated. 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: TSC frequency is ~2500000 KHz 00:05:05.805 EAL: Main lcore 0 is ready (tid=7f563c9a8a00;cpuset=[0]) 00:05:05.805 EAL: Trying to obtain current memory policy. 00:05:05.805 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.805 EAL: Restoring previous memory policy: 0 00:05:05.805 EAL: request: mp_malloc_sync 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: Heap on socket 0 was expanded by 2MB 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: Mem event callback 'spdk:(nil)' registered 00:05:05.805 00:05:05.805 00:05:05.805 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.805 http://cunit.sourceforge.net/ 00:05:05.805 00:05:05.805 00:05:05.805 Suite: components_suite 00:05:05.805 Test: vtophys_malloc_test ...passed 00:05:05.805 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:05.805 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.805 EAL: Restoring previous memory policy: 4 00:05:05.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.805 EAL: request: mp_malloc_sync 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: Heap on socket 0 was expanded by 4MB 00:05:05.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.805 EAL: request: mp_malloc_sync 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: Heap on socket 0 was shrunk by 4MB 00:05:05.805 EAL: Trying to obtain current memory policy. 00:05:05.805 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.805 EAL: Restoring previous memory policy: 4 00:05:05.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.805 EAL: request: mp_malloc_sync 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: Heap on socket 0 was expanded by 6MB 00:05:05.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.805 EAL: request: mp_malloc_sync 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: Heap on socket 0 was shrunk by 6MB 00:05:05.805 EAL: Trying to obtain current memory policy. 00:05:05.805 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.805 EAL: Restoring previous memory policy: 4 00:05:05.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.805 EAL: request: mp_malloc_sync 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: Heap on socket 0 was expanded by 10MB 00:05:05.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.805 EAL: request: mp_malloc_sync 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: Heap on socket 0 was shrunk by 10MB 00:05:05.805 EAL: Trying to obtain current memory policy. 
00:05:05.805 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.805 EAL: Restoring previous memory policy: 4 00:05:05.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.805 EAL: request: mp_malloc_sync 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: Heap on socket 0 was expanded by 18MB 00:05:05.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.805 EAL: request: mp_malloc_sync 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: Heap on socket 0 was shrunk by 18MB 00:05:05.805 EAL: Trying to obtain current memory policy. 00:05:05.805 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.805 EAL: Restoring previous memory policy: 4 00:05:05.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.805 EAL: request: mp_malloc_sync 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: Heap on socket 0 was expanded by 34MB 00:05:05.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.805 EAL: request: mp_malloc_sync 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: Heap on socket 0 was shrunk by 34MB 00:05:05.805 EAL: Trying to obtain current memory policy. 00:05:05.805 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.805 EAL: Restoring previous memory policy: 4 00:05:05.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.805 EAL: request: mp_malloc_sync 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: Heap on socket 0 was expanded by 66MB 00:05:05.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.805 EAL: request: mp_malloc_sync 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: Heap on socket 0 was shrunk by 66MB 00:05:05.805 EAL: Trying to obtain current memory policy. 00:05:05.805 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.805 EAL: Restoring previous memory policy: 4 00:05:05.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.805 EAL: request: mp_malloc_sync 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: Heap on socket 0 was expanded by 130MB 00:05:05.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.805 EAL: request: mp_malloc_sync 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: Heap on socket 0 was shrunk by 130MB 00:05:05.805 EAL: Trying to obtain current memory policy. 00:05:05.805 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.805 EAL: Restoring previous memory policy: 4 00:05:05.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.805 EAL: request: mp_malloc_sync 00:05:05.805 EAL: No shared files mode enabled, IPC is disabled 00:05:05.805 EAL: Heap on socket 0 was expanded by 258MB 00:05:05.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.065 EAL: request: mp_malloc_sync 00:05:06.065 EAL: No shared files mode enabled, IPC is disabled 00:05:06.065 EAL: Heap on socket 0 was shrunk by 258MB 00:05:06.065 EAL: Trying to obtain current memory policy. 
00:05:06.065 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.065 EAL: Restoring previous memory policy: 4 00:05:06.065 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.065 EAL: request: mp_malloc_sync 00:05:06.065 EAL: No shared files mode enabled, IPC is disabled 00:05:06.065 EAL: Heap on socket 0 was expanded by 514MB 00:05:06.065 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.324 EAL: request: mp_malloc_sync 00:05:06.324 EAL: No shared files mode enabled, IPC is disabled 00:05:06.324 EAL: Heap on socket 0 was shrunk by 514MB 00:05:06.324 EAL: Trying to obtain current memory policy. 00:05:06.324 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.324 EAL: Restoring previous memory policy: 4 00:05:06.325 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.325 EAL: request: mp_malloc_sync 00:05:06.325 EAL: No shared files mode enabled, IPC is disabled 00:05:06.325 EAL: Heap on socket 0 was expanded by 1026MB 00:05:06.583 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.842 EAL: request: mp_malloc_sync 00:05:06.842 EAL: No shared files mode enabled, IPC is disabled 00:05:06.842 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:06.842 passed 00:05:06.842 00:05:06.842 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.842 suites 1 1 n/a 0 0 00:05:06.842 tests 2 2 2 0 0 00:05:06.842 asserts 497 497 497 0 n/a 00:05:06.842 00:05:06.842 Elapsed time = 0.959 seconds 00:05:06.842 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.842 EAL: request: mp_malloc_sync 00:05:06.842 EAL: No shared files mode enabled, IPC is disabled 00:05:06.842 EAL: Heap on socket 0 was shrunk by 2MB 00:05:06.842 EAL: No shared files mode enabled, IPC is disabled 00:05:06.842 EAL: No shared files mode enabled, IPC is disabled 00:05:06.842 EAL: No shared files mode enabled, IPC is disabled 00:05:06.842 00:05:06.842 real 0m1.077s 00:05:06.842 user 0m0.618s 00:05:06.842 sys 0m0.435s 00:05:06.842 23:05:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:06.842 23:05:03 -- common/autotest_common.sh@10 -- # set +x 00:05:06.842 ************************************ 00:05:06.842 END TEST env_vtophys 00:05:06.842 ************************************ 00:05:06.842 23:05:03 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:06.842 23:05:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.842 23:05:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.842 23:05:03 -- common/autotest_common.sh@10 -- # set +x 00:05:06.842 ************************************ 00:05:06.842 START TEST env_pci 00:05:06.842 ************************************ 00:05:06.842 23:05:03 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:05:06.842 00:05:06.842 00:05:06.842 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.842 http://cunit.sourceforge.net/ 00:05:06.842 00:05:06.842 00:05:06.842 Suite: pci 00:05:06.843 Test: pci_hook ...[2024-11-17 23:05:03.278070] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1041:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1272011 has claimed it 00:05:06.843 EAL: Cannot find device (10000:00:01.0) 00:05:06.843 EAL: Failed to attach device on primary process 00:05:06.843 passed 00:05:06.843 00:05:06.843 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.843 suites 1 1 n/a 0 0 00:05:06.843 tests 1 1 1 0 0 
00:05:06.843 asserts 25 25 25 0 n/a 00:05:06.843 00:05:06.843 Elapsed time = 0.034 seconds 00:05:06.843 00:05:06.843 real 0m0.053s 00:05:06.843 user 0m0.011s 00:05:06.843 sys 0m0.042s 00:05:06.843 23:05:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:06.843 23:05:03 -- common/autotest_common.sh@10 -- # set +x 00:05:06.843 ************************************ 00:05:06.843 END TEST env_pci 00:05:06.843 ************************************ 00:05:06.843 23:05:03 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:06.843 23:05:03 -- env/env.sh@15 -- # uname 00:05:06.843 23:05:03 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:06.843 23:05:03 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:06.843 23:05:03 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:06.843 23:05:03 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:06.843 23:05:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.843 23:05:03 -- common/autotest_common.sh@10 -- # set +x 00:05:06.843 ************************************ 00:05:06.843 START TEST env_dpdk_post_init 00:05:06.843 ************************************ 00:05:06.843 23:05:03 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:06.843 EAL: Detected CPU lcores: 112 00:05:06.843 EAL: Detected NUMA nodes: 2 00:05:06.843 EAL: Detected static linkage of DPDK 00:05:06.843 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:06.843 EAL: Selected IOVA mode 'VA' 00:05:06.843 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.843 EAL: VFIO support initialized 00:05:06.843 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:07.102 EAL: Using IOMMU type 1 (Type 1) 00:05:07.670 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:11.864 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:11.864 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001000000 00:05:11.864 Starting DPDK initialization... 00:05:11.864 Starting SPDK post initialization... 00:05:11.864 SPDK NVMe probe 00:05:11.864 Attaching to 0000:d8:00.0 00:05:11.864 Attached to 0000:d8:00.0 00:05:11.864 Cleaning up... 
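That env_pci failure is the expected outcome: pci_hook pre-claims the fake device 10000:00:01.0 and asserts that a second spdk_pci_device_claim() is rejected with exactly the 'probably process N has claimed it' error above. The claim appears to be an advisory file lock; a sketch of the mechanism, assuming the /var/tmp/spdk_pci_lock_<bdf> path from the error message and a plain fcntl() write-lock (the helper name is illustrative):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical helper: take the per-device lock a claim implies. */
    static int
    claim_bdf(const char *bdf)
    {
        char path[128];
        snprintf(path, sizeof(path), "/var/tmp/spdk_pci_lock_%s", bdf);
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0)
            return -1;
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
        if (fcntl(fd, F_SETLK, &fl) < 0) {  /* fails while another process holds it */
            close(fd);
            return -1;
        }
        return fd;  /* keep the fd open for as long as the claim should hold */
    }

    int
    main(void)
    {
        int fd = claim_bdf("10000:00:01.0");
        if (fd < 0)
            fprintf(stderr, "claim failed: another process holds the lock\n");
        return fd < 0;
    }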
00:05:11.864 00:05:11.864 real 0m4.733s 00:05:11.864 user 0m3.610s 00:05:11.864 sys 0m0.364s 00:05:11.864 23:05:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.864 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:05:11.864 ************************************ 00:05:11.864 END TEST env_dpdk_post_init 00:05:11.864 ************************************ 00:05:11.864 23:05:08 -- env/env.sh@26 -- # uname 00:05:11.864 23:05:08 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:11.864 23:05:08 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.864 23:05:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.864 23:05:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.864 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:05:11.864 ************************************ 00:05:11.864 START TEST env_mem_callbacks 00:05:11.864 ************************************ 00:05:11.864 23:05:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.864 EAL: Detected CPU lcores: 112 00:05:11.864 EAL: Detected NUMA nodes: 2 00:05:11.864 EAL: Detected static linkage of DPDK 00:05:11.864 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.864 EAL: Selected IOVA mode 'VA' 00:05:11.864 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.864 EAL: VFIO support initialized 00:05:11.864 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.864 00:05:11.864 00:05:11.864 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.864 http://cunit.sourceforge.net/ 00:05:11.864 00:05:11.864 00:05:11.864 Suite: memory 00:05:11.864 Test: test ... 
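Each env binary above is a small CUnit program; the version banner, the 'Suite: memory' header, and the Run Summary tables are all emitted by CUnit's basic runner. A skeleton of that harness, before the trace output below (the assertion body is a stand-in for the real register/vtophys checks):

    #include <CUnit/Basic.h>

    static void
    test_mem(void)
    {
        CU_ASSERT(1 + 1 == 2);  /* stand-in for the real memory assertions */
    }

    int
    main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS)
            return CU_get_error();
        CU_pSuite suite = CU_add_suite("memory", NULL, NULL); /* -> "Suite: memory" */
        if (suite == NULL || CU_add_test(suite, "test", test_mem) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();   /* prints the Run Summary tables seen throughout */
        CU_cleanup_registry();
        return CU_get_error();
    }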
00:05:11.864 register 0x200000200000 2097152 00:05:11.864 malloc 3145728 00:05:11.864 register 0x200000400000 4194304 00:05:11.864 buf 0x200000500000 len 3145728 PASSED 00:05:11.864 malloc 64 00:05:11.864 buf 0x2000004fff40 len 64 PASSED 00:05:11.864 malloc 4194304 00:05:11.864 register 0x200000800000 6291456 00:05:11.864 buf 0x200000a00000 len 4194304 PASSED 00:05:11.864 free 0x200000500000 3145728 00:05:11.864 free 0x2000004fff40 64 00:05:11.864 unregister 0x200000400000 4194304 PASSED 00:05:11.864 free 0x200000a00000 4194304 00:05:11.864 unregister 0x200000800000 6291456 PASSED 00:05:11.864 malloc 8388608 00:05:11.864 register 0x200000400000 10485760 00:05:11.864 buf 0x200000600000 len 8388608 PASSED 00:05:11.864 free 0x200000600000 8388608 00:05:11.864 unregister 0x200000400000 10485760 PASSED 00:05:11.864 passed 00:05:11.864 00:05:11.864 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.864 suites 1 1 n/a 0 0 00:05:11.864 tests 1 1 1 0 0 00:05:11.864 asserts 15 15 15 0 n/a 00:05:11.864 00:05:11.864 Elapsed time = 0.005 seconds 00:05:11.864 00:05:11.864 real 0m0.066s 00:05:11.864 user 0m0.018s 00:05:11.864 sys 0m0.047s 00:05:11.864 23:05:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.864 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:05:11.864 ************************************ 00:05:11.864 END TEST env_mem_callbacks 00:05:11.864 ************************************ 00:05:11.864 00:05:11.864 real 0m6.436s 00:05:11.864 user 0m4.516s 00:05:11.864 sys 0m1.186s 00:05:11.864 23:05:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.864 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:05:11.864 ************************************ 00:05:11.864 END TEST env 00:05:11.864 ************************************ 00:05:11.864 23:05:08 -- spdk/autotest.sh@163 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:11.864 23:05:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.864 23:05:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.864 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:05:11.864 ************************************ 00:05:11.864 START TEST rpc 00:05:11.864 ************************************ 00:05:11.864 23:05:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:05:11.864 * Looking for test storage... 
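The register/unregister trace just above is env_mem_callbacks confirming that SPDK's memory map mirrors rte_malloc activity: every allocation that grows the heap surfaces as a register of the new region, and every shrink as an unregister. Applications can pin their own buffers into the same map; a sketch using spdk_mem_register()/spdk_mem_unregister() from spdk/env.h (region size and source are illustrative, and registration can fail for memory SPDK cannot translate):

    #include <stdio.h>
    #include <stdlib.h>
    #include "spdk/env.h"

    int
    main(int argc, char **argv)
    {
        struct spdk_env_opts opts;
        spdk_env_opts_init(&opts);
        opts.name = "mem_cb_sketch";    /* hypothetical app name */
        if (spdk_env_init(&opts) < 0)
            return 1;

        /* spdk_mem_register expects 2 MB alignment for both address and length */
        size_t len = 2 * 1024 * 1024;
        void *buf = aligned_alloc(len, len);
        if (buf == NULL)
            return 1;
        if (spdk_mem_register(buf, len) == 0) {     /* -> "register <vaddr> <len>" */
            spdk_mem_unregister(buf, len);          /* -> "unregister <vaddr> <len>" */
        } else {
            fprintf(stderr, "register failed (region not translatable)\n");
        }
        free(buf);
        return 0;
    }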
00:05:11.864 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:11.864 23:05:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:11.864 23:05:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:11.864 23:05:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:11.864 23:05:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:11.864 23:05:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:11.864 23:05:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:11.864 23:05:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:11.864 23:05:08 -- scripts/common.sh@335 -- # IFS=.-: 00:05:11.864 23:05:08 -- scripts/common.sh@335 -- # read -ra ver1 00:05:11.864 23:05:08 -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.864 23:05:08 -- scripts/common.sh@336 -- # read -ra ver2 00:05:11.864 23:05:08 -- scripts/common.sh@337 -- # local 'op=<' 00:05:11.864 23:05:08 -- scripts/common.sh@339 -- # ver1_l=2 00:05:11.864 23:05:08 -- scripts/common.sh@340 -- # ver2_l=1 00:05:11.864 23:05:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:11.864 23:05:08 -- scripts/common.sh@343 -- # case "$op" in 00:05:11.864 23:05:08 -- scripts/common.sh@344 -- # : 1 00:05:11.864 23:05:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:11.864 23:05:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.124 23:05:08 -- scripts/common.sh@364 -- # decimal 1 00:05:12.124 23:05:08 -- scripts/common.sh@352 -- # local d=1 00:05:12.124 23:05:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.124 23:05:08 -- scripts/common.sh@354 -- # echo 1 00:05:12.124 23:05:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:12.124 23:05:08 -- scripts/common.sh@365 -- # decimal 2 00:05:12.124 23:05:08 -- scripts/common.sh@352 -- # local d=2 00:05:12.124 23:05:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.124 23:05:08 -- scripts/common.sh@354 -- # echo 2 00:05:12.124 23:05:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:12.124 23:05:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:12.124 23:05:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:12.124 23:05:08 -- scripts/common.sh@367 -- # return 0 00:05:12.124 23:05:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.124 23:05:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:12.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.124 --rc genhtml_branch_coverage=1 00:05:12.124 --rc genhtml_function_coverage=1 00:05:12.124 --rc genhtml_legend=1 00:05:12.124 --rc geninfo_all_blocks=1 00:05:12.124 --rc geninfo_unexecuted_blocks=1 00:05:12.124 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.124 ' 00:05:12.124 23:05:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:12.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.124 --rc genhtml_branch_coverage=1 00:05:12.124 --rc genhtml_function_coverage=1 00:05:12.124 --rc genhtml_legend=1 00:05:12.124 --rc geninfo_all_blocks=1 00:05:12.124 --rc geninfo_unexecuted_blocks=1 00:05:12.124 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.124 ' 00:05:12.124 23:05:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:12.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.124 --rc genhtml_branch_coverage=1 00:05:12.124 
--rc genhtml_function_coverage=1 00:05:12.124 --rc genhtml_legend=1 00:05:12.124 --rc geninfo_all_blocks=1 00:05:12.124 --rc geninfo_unexecuted_blocks=1 00:05:12.124 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.124 ' 00:05:12.124 23:05:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:12.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.124 --rc genhtml_branch_coverage=1 00:05:12.124 --rc genhtml_function_coverage=1 00:05:12.124 --rc genhtml_legend=1 00:05:12.124 --rc geninfo_all_blocks=1 00:05:12.124 --rc geninfo_unexecuted_blocks=1 00:05:12.124 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:12.124 ' 00:05:12.124 23:05:08 -- rpc/rpc.sh@65 -- # spdk_pid=1272973 00:05:12.124 23:05:08 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.124 23:05:08 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:12.124 23:05:08 -- rpc/rpc.sh@67 -- # waitforlisten 1272973 00:05:12.124 23:05:08 -- common/autotest_common.sh@829 -- # '[' -z 1272973 ']' 00:05:12.124 23:05:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.124 23:05:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.124 23:05:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.124 23:05:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.124 23:05:08 -- common/autotest_common.sh@10 -- # set +x 00:05:12.124 [2024-11-17 23:05:08.516341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:12.124 [2024-11-17 23:05:08.516413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1272973 ] 00:05:12.124 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.124 [2024-11-17 23:05:08.584847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.124 [2024-11-17 23:05:08.653270] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:12.124 [2024-11-17 23:05:08.653372] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:12.124 [2024-11-17 23:05:08.653382] app.c: 492:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1272973' to capture a snapshot of events at runtime. 00:05:12.124 [2024-11-17 23:05:08.653391] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1272973 for offline analysis/debug. 
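spdk_tgt is now up, traced with '-e bdev', and listening on /var/tmp/spdk.sock; every rpc_cmd in the tests below is a JSON-RPC 2.0 request over that UNIX socket (scripts/rpc.py does the framing). A minimal raw client sketch against the default socket, using rpc_get_methods as the request (a single unframed read, so a sketch rather than a robust client):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int
    main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");  /* waitforlisten loops until this starts succeeding */
            return 1;
        }
        const char *req = "{\"jsonrpc\":\"2.0\",\"method\":\"rpc_get_methods\",\"id\":1}";
        write(fd, req, strlen(req));
        char buf[4096];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);  /* may return a partial response */
        if (n > 0) {
            buf[n] = '\0';
            printf("%s\n", buf);
        }
        close(fd);
        return 0;
    }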
00:05:12.124 [2024-11-17 23:05:08.653408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.061 23:05:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.061 23:05:09 -- common/autotest_common.sh@862 -- # return 0 00:05:13.061 23:05:09 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:13.061 23:05:09 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:05:13.061 23:05:09 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:13.061 23:05:09 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:13.061 23:05:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.061 23:05:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.061 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.061 ************************************ 00:05:13.061 START TEST rpc_integrity 00:05:13.061 ************************************ 00:05:13.061 23:05:09 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:13.061 23:05:09 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.061 23:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.061 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.061 23:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.061 23:05:09 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:13.061 23:05:09 -- rpc/rpc.sh@13 -- # jq length 00:05:13.061 23:05:09 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:13.061 23:05:09 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.061 23:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.061 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.061 23:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.061 23:05:09 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:13.061 23:05:09 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.061 23:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.061 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.061 23:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.061 23:05:09 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.061 { 00:05:13.061 "name": "Malloc0", 00:05:13.061 "aliases": [ 00:05:13.061 "04f8c775-1cca-4875-8d48-ec8e344c2a45" 00:05:13.061 ], 00:05:13.061 "product_name": "Malloc disk", 00:05:13.061 "block_size": 512, 00:05:13.061 "num_blocks": 16384, 00:05:13.061 "uuid": "04f8c775-1cca-4875-8d48-ec8e344c2a45", 00:05:13.061 "assigned_rate_limits": { 00:05:13.061 "rw_ios_per_sec": 0, 00:05:13.061 "rw_mbytes_per_sec": 0, 00:05:13.061 "r_mbytes_per_sec": 0, 00:05:13.061 "w_mbytes_per_sec": 0 00:05:13.061 }, 00:05:13.061 "claimed": false, 00:05:13.061 "zoned": false, 00:05:13.061 "supported_io_types": { 00:05:13.061 "read": true, 00:05:13.061 "write": true, 00:05:13.061 "unmap": true, 00:05:13.061 "write_zeroes": true, 00:05:13.061 "flush": true, 00:05:13.061 "reset": true, 00:05:13.061 "compare": false, 00:05:13.061 "compare_and_write": false, 
00:05:13.061 "abort": true, 00:05:13.061 "nvme_admin": false, 00:05:13.061 "nvme_io": false 00:05:13.061 }, 00:05:13.061 "memory_domains": [ 00:05:13.061 { 00:05:13.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.061 "dma_device_type": 2 00:05:13.061 } 00:05:13.061 ], 00:05:13.061 "driver_specific": {} 00:05:13.061 } 00:05:13.061 ]' 00:05:13.061 23:05:09 -- rpc/rpc.sh@17 -- # jq length 00:05:13.061 23:05:09 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.061 23:05:09 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:13.061 23:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.061 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.062 [2024-11-17 23:05:09.475380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:13.062 [2024-11-17 23:05:09.475415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:13.062 [2024-11-17 23:05:09.475437] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4c4c030 00:05:13.062 [2024-11-17 23:05:09.475447] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:13.062 [2024-11-17 23:05:09.476284] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:13.062 [2024-11-17 23:05:09.476307] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.062 Passthru0 00:05:13.062 23:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.062 23:05:09 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.062 23:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.062 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.062 23:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.062 23:05:09 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.062 { 00:05:13.062 "name": "Malloc0", 00:05:13.062 "aliases": [ 00:05:13.062 "04f8c775-1cca-4875-8d48-ec8e344c2a45" 00:05:13.062 ], 00:05:13.062 "product_name": "Malloc disk", 00:05:13.062 "block_size": 512, 00:05:13.062 "num_blocks": 16384, 00:05:13.062 "uuid": "04f8c775-1cca-4875-8d48-ec8e344c2a45", 00:05:13.062 "assigned_rate_limits": { 00:05:13.062 "rw_ios_per_sec": 0, 00:05:13.062 "rw_mbytes_per_sec": 0, 00:05:13.062 "r_mbytes_per_sec": 0, 00:05:13.062 "w_mbytes_per_sec": 0 00:05:13.062 }, 00:05:13.062 "claimed": true, 00:05:13.062 "claim_type": "exclusive_write", 00:05:13.062 "zoned": false, 00:05:13.062 "supported_io_types": { 00:05:13.062 "read": true, 00:05:13.062 "write": true, 00:05:13.062 "unmap": true, 00:05:13.062 "write_zeroes": true, 00:05:13.062 "flush": true, 00:05:13.062 "reset": true, 00:05:13.062 "compare": false, 00:05:13.062 "compare_and_write": false, 00:05:13.062 "abort": true, 00:05:13.062 "nvme_admin": false, 00:05:13.062 "nvme_io": false 00:05:13.062 }, 00:05:13.062 "memory_domains": [ 00:05:13.062 { 00:05:13.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.062 "dma_device_type": 2 00:05:13.062 } 00:05:13.062 ], 00:05:13.062 "driver_specific": {} 00:05:13.062 }, 00:05:13.062 { 00:05:13.062 "name": "Passthru0", 00:05:13.062 "aliases": [ 00:05:13.062 "f2f8ba9f-5efd-53fa-aca3-ca812d362758" 00:05:13.062 ], 00:05:13.062 "product_name": "passthru", 00:05:13.062 "block_size": 512, 00:05:13.062 "num_blocks": 16384, 00:05:13.062 "uuid": "f2f8ba9f-5efd-53fa-aca3-ca812d362758", 00:05:13.062 "assigned_rate_limits": { 00:05:13.062 "rw_ios_per_sec": 0, 00:05:13.062 "rw_mbytes_per_sec": 0, 00:05:13.062 "r_mbytes_per_sec": 0, 00:05:13.062 
"w_mbytes_per_sec": 0 00:05:13.062 }, 00:05:13.062 "claimed": false, 00:05:13.062 "zoned": false, 00:05:13.062 "supported_io_types": { 00:05:13.062 "read": true, 00:05:13.062 "write": true, 00:05:13.062 "unmap": true, 00:05:13.062 "write_zeroes": true, 00:05:13.062 "flush": true, 00:05:13.062 "reset": true, 00:05:13.062 "compare": false, 00:05:13.062 "compare_and_write": false, 00:05:13.062 "abort": true, 00:05:13.062 "nvme_admin": false, 00:05:13.062 "nvme_io": false 00:05:13.062 }, 00:05:13.062 "memory_domains": [ 00:05:13.062 { 00:05:13.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.062 "dma_device_type": 2 00:05:13.062 } 00:05:13.062 ], 00:05:13.062 "driver_specific": { 00:05:13.062 "passthru": { 00:05:13.062 "name": "Passthru0", 00:05:13.062 "base_bdev_name": "Malloc0" 00:05:13.062 } 00:05:13.062 } 00:05:13.062 } 00:05:13.062 ]' 00:05:13.062 23:05:09 -- rpc/rpc.sh@21 -- # jq length 00:05:13.062 23:05:09 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:13.062 23:05:09 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:13.062 23:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.062 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.062 23:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.062 23:05:09 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:13.062 23:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.062 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.062 23:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.062 23:05:09 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.062 23:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.062 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.062 23:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.062 23:05:09 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.062 23:05:09 -- rpc/rpc.sh@26 -- # jq length 00:05:13.062 23:05:09 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.062 00:05:13.062 real 0m0.253s 00:05:13.062 user 0m0.165s 00:05:13.062 sys 0m0.037s 00:05:13.062 23:05:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.062 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.062 ************************************ 00:05:13.062 END TEST rpc_integrity 00:05:13.062 ************************************ 00:05:13.062 23:05:09 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:13.062 23:05:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.062 23:05:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.062 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.062 ************************************ 00:05:13.062 START TEST rpc_plugins 00:05:13.062 ************************************ 00:05:13.062 23:05:09 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:05:13.062 23:05:09 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:13.062 23:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.062 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.322 23:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.322 23:05:09 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:13.322 23:05:09 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:13.322 23:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.322 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.322 23:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.322 23:05:09 -- 
rpc/rpc.sh@31 -- # bdevs='[ 00:05:13.322 { 00:05:13.322 "name": "Malloc1", 00:05:13.322 "aliases": [ 00:05:13.322 "daf3d5ea-9c88-4e51-a4e5-a5bdc0be8332" 00:05:13.322 ], 00:05:13.322 "product_name": "Malloc disk", 00:05:13.322 "block_size": 4096, 00:05:13.322 "num_blocks": 256, 00:05:13.322 "uuid": "daf3d5ea-9c88-4e51-a4e5-a5bdc0be8332", 00:05:13.322 "assigned_rate_limits": { 00:05:13.322 "rw_ios_per_sec": 0, 00:05:13.322 "rw_mbytes_per_sec": 0, 00:05:13.322 "r_mbytes_per_sec": 0, 00:05:13.322 "w_mbytes_per_sec": 0 00:05:13.322 }, 00:05:13.322 "claimed": false, 00:05:13.322 "zoned": false, 00:05:13.322 "supported_io_types": { 00:05:13.322 "read": true, 00:05:13.322 "write": true, 00:05:13.322 "unmap": true, 00:05:13.322 "write_zeroes": true, 00:05:13.322 "flush": true, 00:05:13.322 "reset": true, 00:05:13.322 "compare": false, 00:05:13.322 "compare_and_write": false, 00:05:13.322 "abort": true, 00:05:13.322 "nvme_admin": false, 00:05:13.322 "nvme_io": false 00:05:13.322 }, 00:05:13.322 "memory_domains": [ 00:05:13.322 { 00:05:13.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.322 "dma_device_type": 2 00:05:13.322 } 00:05:13.322 ], 00:05:13.322 "driver_specific": {} 00:05:13.322 } 00:05:13.322 ]' 00:05:13.322 23:05:09 -- rpc/rpc.sh@32 -- # jq length 00:05:13.322 23:05:09 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:13.322 23:05:09 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:13.322 23:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.322 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.322 23:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.322 23:05:09 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:13.322 23:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.322 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.322 23:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.322 23:05:09 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:13.322 23:05:09 -- rpc/rpc.sh@36 -- # jq length 00:05:13.322 23:05:09 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:13.322 00:05:13.322 real 0m0.136s 00:05:13.322 user 0m0.082s 00:05:13.322 sys 0m0.020s 00:05:13.322 23:05:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.322 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.322 ************************************ 00:05:13.322 END TEST rpc_plugins 00:05:13.322 ************************************ 00:05:13.322 23:05:09 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:13.322 23:05:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.322 23:05:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.322 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.322 ************************************ 00:05:13.322 START TEST rpc_trace_cmd_test 00:05:13.322 ************************************ 00:05:13.323 23:05:09 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:05:13.323 23:05:09 -- rpc/rpc.sh@40 -- # local info 00:05:13.323 23:05:09 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:13.323 23:05:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.323 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:05:13.323 23:05:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.323 23:05:09 -- rpc/rpc.sh@42 -- # info='{ 00:05:13.323 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1272973", 00:05:13.323 "tpoint_group_mask": "0x8", 00:05:13.323 "iscsi_conn": { 00:05:13.323 "mask": "0x2", 
00:05:13.323 "tpoint_mask": "0x0" 00:05:13.323 }, 00:05:13.323 "scsi": { 00:05:13.323 "mask": "0x4", 00:05:13.323 "tpoint_mask": "0x0" 00:05:13.323 }, 00:05:13.323 "bdev": { 00:05:13.323 "mask": "0x8", 00:05:13.323 "tpoint_mask": "0xffffffffffffffff" 00:05:13.323 }, 00:05:13.323 "nvmf_rdma": { 00:05:13.323 "mask": "0x10", 00:05:13.323 "tpoint_mask": "0x0" 00:05:13.323 }, 00:05:13.323 "nvmf_tcp": { 00:05:13.323 "mask": "0x20", 00:05:13.323 "tpoint_mask": "0x0" 00:05:13.323 }, 00:05:13.323 "ftl": { 00:05:13.323 "mask": "0x40", 00:05:13.323 "tpoint_mask": "0x0" 00:05:13.323 }, 00:05:13.323 "blobfs": { 00:05:13.323 "mask": "0x80", 00:05:13.323 "tpoint_mask": "0x0" 00:05:13.323 }, 00:05:13.323 "dsa": { 00:05:13.323 "mask": "0x200", 00:05:13.323 "tpoint_mask": "0x0" 00:05:13.323 }, 00:05:13.323 "thread": { 00:05:13.323 "mask": "0x400", 00:05:13.323 "tpoint_mask": "0x0" 00:05:13.323 }, 00:05:13.323 "nvme_pcie": { 00:05:13.323 "mask": "0x800", 00:05:13.323 "tpoint_mask": "0x0" 00:05:13.323 }, 00:05:13.323 "iaa": { 00:05:13.323 "mask": "0x1000", 00:05:13.323 "tpoint_mask": "0x0" 00:05:13.323 }, 00:05:13.323 "nvme_tcp": { 00:05:13.323 "mask": "0x2000", 00:05:13.323 "tpoint_mask": "0x0" 00:05:13.323 }, 00:05:13.323 "bdev_nvme": { 00:05:13.323 "mask": "0x4000", 00:05:13.323 "tpoint_mask": "0x0" 00:05:13.323 } 00:05:13.323 }' 00:05:13.323 23:05:09 -- rpc/rpc.sh@43 -- # jq length 00:05:13.323 23:05:09 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:13.323 23:05:09 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:13.582 23:05:09 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:13.582 23:05:09 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:13.582 23:05:09 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:13.582 23:05:09 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:13.582 23:05:10 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:13.582 23:05:10 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:13.582 23:05:10 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:13.582 00:05:13.582 real 0m0.212s 00:05:13.582 user 0m0.168s 00:05:13.582 sys 0m0.035s 00:05:13.582 23:05:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.582 23:05:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.582 ************************************ 00:05:13.582 END TEST rpc_trace_cmd_test 00:05:13.582 ************************************ 00:05:13.582 23:05:10 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:13.582 23:05:10 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:13.582 23:05:10 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:13.582 23:05:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.582 23:05:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.582 23:05:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.582 ************************************ 00:05:13.582 START TEST rpc_daemon_integrity 00:05:13.582 ************************************ 00:05:13.582 23:05:10 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:05:13.582 23:05:10 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.582 23:05:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.582 23:05:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.582 23:05:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.582 23:05:10 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:13.582 23:05:10 -- rpc/rpc.sh@13 -- # jq length 00:05:13.582 23:05:10 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:13.582 23:05:10 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.582 
23:05:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.582 23:05:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.582 23:05:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.582 23:05:10 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:13.582 23:05:10 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.582 23:05:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.582 23:05:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.582 23:05:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.582 23:05:10 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.582 { 00:05:13.582 "name": "Malloc2", 00:05:13.582 "aliases": [ 00:05:13.582 "ad183a25-14c8-4965-ae40-17e9885e52a4" 00:05:13.582 ], 00:05:13.582 "product_name": "Malloc disk", 00:05:13.582 "block_size": 512, 00:05:13.582 "num_blocks": 16384, 00:05:13.582 "uuid": "ad183a25-14c8-4965-ae40-17e9885e52a4", 00:05:13.582 "assigned_rate_limits": { 00:05:13.582 "rw_ios_per_sec": 0, 00:05:13.582 "rw_mbytes_per_sec": 0, 00:05:13.582 "r_mbytes_per_sec": 0, 00:05:13.582 "w_mbytes_per_sec": 0 00:05:13.582 }, 00:05:13.582 "claimed": false, 00:05:13.582 "zoned": false, 00:05:13.582 "supported_io_types": { 00:05:13.582 "read": true, 00:05:13.582 "write": true, 00:05:13.582 "unmap": true, 00:05:13.582 "write_zeroes": true, 00:05:13.582 "flush": true, 00:05:13.582 "reset": true, 00:05:13.582 "compare": false, 00:05:13.582 "compare_and_write": false, 00:05:13.582 "abort": true, 00:05:13.582 "nvme_admin": false, 00:05:13.582 "nvme_io": false 00:05:13.582 }, 00:05:13.582 "memory_domains": [ 00:05:13.582 { 00:05:13.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.582 "dma_device_type": 2 00:05:13.582 } 00:05:13.582 ], 00:05:13.582 "driver_specific": {} 00:05:13.582 } 00:05:13.582 ]' 00:05:13.583 23:05:10 -- rpc/rpc.sh@17 -- # jq length 00:05:13.842 23:05:10 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.842 23:05:10 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:13.843 23:05:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.843 23:05:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.843 [2024-11-17 23:05:10.237381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:13.843 [2024-11-17 23:05:10.237414] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:13.843 [2024-11-17 23:05:10.237429] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x4dd5980 00:05:13.843 [2024-11-17 23:05:10.237439] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:13.843 [2024-11-17 23:05:10.238165] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:13.843 [2024-11-17 23:05:10.238185] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.843 Passthru0 00:05:13.843 23:05:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.843 23:05:10 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.843 23:05:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.843 23:05:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.843 23:05:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.843 23:05:10 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.843 { 00:05:13.843 "name": "Malloc2", 00:05:13.843 "aliases": [ 00:05:13.843 "ad183a25-14c8-4965-ae40-17e9885e52a4" 00:05:13.843 ], 00:05:13.843 "product_name": "Malloc disk", 00:05:13.843 "block_size": 512, 00:05:13.843 "num_blocks": 16384, 
00:05:13.843 "uuid": "ad183a25-14c8-4965-ae40-17e9885e52a4", 00:05:13.843 "assigned_rate_limits": { 00:05:13.843 "rw_ios_per_sec": 0, 00:05:13.843 "rw_mbytes_per_sec": 0, 00:05:13.843 "r_mbytes_per_sec": 0, 00:05:13.843 "w_mbytes_per_sec": 0 00:05:13.843 }, 00:05:13.843 "claimed": true, 00:05:13.843 "claim_type": "exclusive_write", 00:05:13.843 "zoned": false, 00:05:13.843 "supported_io_types": { 00:05:13.843 "read": true, 00:05:13.843 "write": true, 00:05:13.843 "unmap": true, 00:05:13.843 "write_zeroes": true, 00:05:13.843 "flush": true, 00:05:13.843 "reset": true, 00:05:13.843 "compare": false, 00:05:13.843 "compare_and_write": false, 00:05:13.843 "abort": true, 00:05:13.843 "nvme_admin": false, 00:05:13.843 "nvme_io": false 00:05:13.843 }, 00:05:13.843 "memory_domains": [ 00:05:13.843 { 00:05:13.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.843 "dma_device_type": 2 00:05:13.843 } 00:05:13.843 ], 00:05:13.843 "driver_specific": {} 00:05:13.843 }, 00:05:13.843 { 00:05:13.843 "name": "Passthru0", 00:05:13.843 "aliases": [ 00:05:13.843 "f5b42720-b599-5882-844d-b9dbde1519fb" 00:05:13.843 ], 00:05:13.843 "product_name": "passthru", 00:05:13.843 "block_size": 512, 00:05:13.843 "num_blocks": 16384, 00:05:13.843 "uuid": "f5b42720-b599-5882-844d-b9dbde1519fb", 00:05:13.843 "assigned_rate_limits": { 00:05:13.843 "rw_ios_per_sec": 0, 00:05:13.843 "rw_mbytes_per_sec": 0, 00:05:13.843 "r_mbytes_per_sec": 0, 00:05:13.843 "w_mbytes_per_sec": 0 00:05:13.843 }, 00:05:13.843 "claimed": false, 00:05:13.843 "zoned": false, 00:05:13.843 "supported_io_types": { 00:05:13.843 "read": true, 00:05:13.843 "write": true, 00:05:13.843 "unmap": true, 00:05:13.843 "write_zeroes": true, 00:05:13.843 "flush": true, 00:05:13.843 "reset": true, 00:05:13.843 "compare": false, 00:05:13.843 "compare_and_write": false, 00:05:13.843 "abort": true, 00:05:13.843 "nvme_admin": false, 00:05:13.843 "nvme_io": false 00:05:13.843 }, 00:05:13.843 "memory_domains": [ 00:05:13.843 { 00:05:13.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.843 "dma_device_type": 2 00:05:13.843 } 00:05:13.843 ], 00:05:13.843 "driver_specific": { 00:05:13.843 "passthru": { 00:05:13.843 "name": "Passthru0", 00:05:13.843 "base_bdev_name": "Malloc2" 00:05:13.843 } 00:05:13.843 } 00:05:13.843 } 00:05:13.843 ]' 00:05:13.843 23:05:10 -- rpc/rpc.sh@21 -- # jq length 00:05:13.843 23:05:10 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:13.843 23:05:10 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:13.843 23:05:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.843 23:05:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.843 23:05:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.843 23:05:10 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:13.843 23:05:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.843 23:05:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.843 23:05:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.843 23:05:10 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.843 23:05:10 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.843 23:05:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.843 23:05:10 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.843 23:05:10 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.843 23:05:10 -- rpc/rpc.sh@26 -- # jq length 00:05:13.843 23:05:10 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.843 00:05:13.843 real 0m0.278s 00:05:13.843 user 0m0.176s 00:05:13.843 sys 0m0.044s 00:05:13.843 
23:05:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:13.843 23:05:10 -- common/autotest_common.sh@10 -- # set +x 00:05:13.843 ************************************ 00:05:13.843 END TEST rpc_daemon_integrity 00:05:13.843 ************************************ 00:05:13.843 23:05:10 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:13.843 23:05:10 -- rpc/rpc.sh@84 -- # killprocess 1272973 00:05:13.843 23:05:10 -- common/autotest_common.sh@936 -- # '[' -z 1272973 ']' 00:05:13.843 23:05:10 -- common/autotest_common.sh@940 -- # kill -0 1272973 00:05:13.843 23:05:10 -- common/autotest_common.sh@941 -- # uname 00:05:13.843 23:05:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:13.843 23:05:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1272973 00:05:14.102 23:05:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:14.102 23:05:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:14.102 23:05:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1272973' 00:05:14.102 killing process with pid 1272973 00:05:14.102 23:05:10 -- common/autotest_common.sh@955 -- # kill 1272973 00:05:14.102 23:05:10 -- common/autotest_common.sh@960 -- # wait 1272973 00:05:14.362 00:05:14.362 real 0m2.499s 00:05:14.362 user 0m3.109s 00:05:14.362 sys 0m0.754s 00:05:14.362 23:05:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.362 23:05:10 -- common/autotest_common.sh@10 -- # set +x 00:05:14.362 ************************************ 00:05:14.362 END TEST rpc 00:05:14.362 ************************************ 00:05:14.362 23:05:10 -- spdk/autotest.sh@164 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:14.362 23:05:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.362 23:05:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.362 23:05:10 -- common/autotest_common.sh@10 -- # set +x 00:05:14.362 ************************************ 00:05:14.362 START TEST rpc_client 00:05:14.362 ************************************ 00:05:14.362 23:05:10 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:14.362 * Looking for test storage... 
00:05:14.362 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:05:14.362 23:05:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:14.362 23:05:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:14.362 23:05:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:14.621 23:05:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:14.621 23:05:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:14.621 23:05:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:14.621 23:05:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:14.621 23:05:11 -- scripts/common.sh@335 -- # IFS=.-: 00:05:14.621 23:05:11 -- scripts/common.sh@335 -- # read -ra ver1 00:05:14.621 23:05:11 -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.621 23:05:11 -- scripts/common.sh@336 -- # read -ra ver2 00:05:14.622 23:05:11 -- scripts/common.sh@337 -- # local 'op=<' 00:05:14.622 23:05:11 -- scripts/common.sh@339 -- # ver1_l=2 00:05:14.622 23:05:11 -- scripts/common.sh@340 -- # ver2_l=1 00:05:14.622 23:05:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:14.622 23:05:11 -- scripts/common.sh@343 -- # case "$op" in 00:05:14.622 23:05:11 -- scripts/common.sh@344 -- # : 1 00:05:14.622 23:05:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:14.622 23:05:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.622 23:05:11 -- scripts/common.sh@364 -- # decimal 1 00:05:14.622 23:05:11 -- scripts/common.sh@352 -- # local d=1 00:05:14.622 23:05:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.622 23:05:11 -- scripts/common.sh@354 -- # echo 1 00:05:14.622 23:05:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:14.622 23:05:11 -- scripts/common.sh@365 -- # decimal 2 00:05:14.622 23:05:11 -- scripts/common.sh@352 -- # local d=2 00:05:14.622 23:05:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.622 23:05:11 -- scripts/common.sh@354 -- # echo 2 00:05:14.622 23:05:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:14.622 23:05:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:14.622 23:05:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:14.622 23:05:11 -- scripts/common.sh@367 -- # return 0 00:05:14.622 23:05:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.622 23:05:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:14.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.622 --rc genhtml_branch_coverage=1 00:05:14.622 --rc genhtml_function_coverage=1 00:05:14.622 --rc genhtml_legend=1 00:05:14.622 --rc geninfo_all_blocks=1 00:05:14.622 --rc geninfo_unexecuted_blocks=1 00:05:14.622 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:14.622 ' 00:05:14.622 23:05:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:14.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.622 --rc genhtml_branch_coverage=1 00:05:14.622 --rc genhtml_function_coverage=1 00:05:14.622 --rc genhtml_legend=1 00:05:14.622 --rc geninfo_all_blocks=1 00:05:14.622 --rc geninfo_unexecuted_blocks=1 00:05:14.622 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:14.622 ' 00:05:14.622 23:05:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:14.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.622 --rc genhtml_branch_coverage=1 
00:05:14.622 --rc genhtml_function_coverage=1 00:05:14.622 --rc genhtml_legend=1 00:05:14.622 --rc geninfo_all_blocks=1 00:05:14.622 --rc geninfo_unexecuted_blocks=1 00:05:14.622 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:14.622 ' 00:05:14.622 23:05:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:14.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.622 --rc genhtml_branch_coverage=1 00:05:14.622 --rc genhtml_function_coverage=1 00:05:14.622 --rc genhtml_legend=1 00:05:14.622 --rc geninfo_all_blocks=1 00:05:14.622 --rc geninfo_unexecuted_blocks=1 00:05:14.622 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:14.622 ' 00:05:14.622 23:05:11 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:14.622 OK 00:05:14.622 23:05:11 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:14.622 00:05:14.622 real 0m0.210s 00:05:14.622 user 0m0.117s 00:05:14.622 sys 0m0.106s 00:05:14.622 23:05:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.622 23:05:11 -- common/autotest_common.sh@10 -- # set +x 00:05:14.622 ************************************ 00:05:14.622 END TEST rpc_client 00:05:14.622 ************************************ 00:05:14.622 23:05:11 -- spdk/autotest.sh@165 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:14.622 23:05:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.622 23:05:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.622 23:05:11 -- common/autotest_common.sh@10 -- # set +x 00:05:14.622 ************************************ 00:05:14.622 START TEST json_config 00:05:14.622 ************************************ 00:05:14.622 23:05:11 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:05:14.622 23:05:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:14.622 23:05:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:14.622 23:05:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:14.883 23:05:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:14.883 23:05:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:14.883 23:05:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:14.883 23:05:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:14.883 23:05:11 -- scripts/common.sh@335 -- # IFS=.-: 00:05:14.883 23:05:11 -- scripts/common.sh@335 -- # read -ra ver1 00:05:14.883 23:05:11 -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.883 23:05:11 -- scripts/common.sh@336 -- # read -ra ver2 00:05:14.883 23:05:11 -- scripts/common.sh@337 -- # local 'op=<' 00:05:14.883 23:05:11 -- scripts/common.sh@339 -- # ver1_l=2 00:05:14.883 23:05:11 -- scripts/common.sh@340 -- # ver2_l=1 00:05:14.883 23:05:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:14.883 23:05:11 -- scripts/common.sh@343 -- # case "$op" in 00:05:14.883 23:05:11 -- scripts/common.sh@344 -- # : 1 00:05:14.883 23:05:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:14.883 23:05:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.883 23:05:11 -- scripts/common.sh@364 -- # decimal 1 00:05:14.883 23:05:11 -- scripts/common.sh@352 -- # local d=1 00:05:14.883 23:05:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.883 23:05:11 -- scripts/common.sh@354 -- # echo 1 00:05:14.883 23:05:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:14.883 23:05:11 -- scripts/common.sh@365 -- # decimal 2 00:05:14.883 23:05:11 -- scripts/common.sh@352 -- # local d=2 00:05:14.883 23:05:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.883 23:05:11 -- scripts/common.sh@354 -- # echo 2 00:05:14.883 23:05:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:14.883 23:05:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:14.883 23:05:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:14.883 23:05:11 -- scripts/common.sh@367 -- # return 0 00:05:14.883 23:05:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.883 23:05:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:14.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.883 --rc genhtml_branch_coverage=1 00:05:14.883 --rc genhtml_function_coverage=1 00:05:14.883 --rc genhtml_legend=1 00:05:14.883 --rc geninfo_all_blocks=1 00:05:14.883 --rc geninfo_unexecuted_blocks=1 00:05:14.883 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:14.883 ' 00:05:14.883 23:05:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:14.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.883 --rc genhtml_branch_coverage=1 00:05:14.883 --rc genhtml_function_coverage=1 00:05:14.883 --rc genhtml_legend=1 00:05:14.883 --rc geninfo_all_blocks=1 00:05:14.883 --rc geninfo_unexecuted_blocks=1 00:05:14.883 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:14.883 ' 00:05:14.883 23:05:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:14.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.883 --rc genhtml_branch_coverage=1 00:05:14.883 --rc genhtml_function_coverage=1 00:05:14.883 --rc genhtml_legend=1 00:05:14.883 --rc geninfo_all_blocks=1 00:05:14.883 --rc geninfo_unexecuted_blocks=1 00:05:14.883 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:14.883 ' 00:05:14.883 23:05:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:14.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.883 --rc genhtml_branch_coverage=1 00:05:14.883 --rc genhtml_function_coverage=1 00:05:14.883 --rc genhtml_legend=1 00:05:14.883 --rc geninfo_all_blocks=1 00:05:14.883 --rc geninfo_unexecuted_blocks=1 00:05:14.883 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:14.883 ' 00:05:14.883 23:05:11 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:14.883 23:05:11 -- nvmf/common.sh@7 -- # uname -s 00:05:14.883 23:05:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.883 23:05:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.883 23:05:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.883 23:05:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.883 23:05:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.883 23:05:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.883 23:05:11 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.883 23:05:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.883 23:05:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.883 23:05:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.883 23:05:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:14.883 23:05:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:14.883 23:05:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.883 23:05:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.883 23:05:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.883 23:05:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:14.883 23:05:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.883 23:05:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.883 23:05:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.883 23:05:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.883 23:05:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.883 23:05:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.883 23:05:11 -- paths/export.sh@5 -- # export PATH 00:05:14.883 23:05:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.883 23:05:11 -- nvmf/common.sh@46 -- # : 0 00:05:14.883 23:05:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:14.883 23:05:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:14.883 23:05:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:14.883 23:05:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.883 23:05:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.883 23:05:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:14.883 23:05:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:14.883 
23:05:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:14.883 23:05:11 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:14.883 23:05:11 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:14.883 23:05:11 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:14.883 23:05:11 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:14.883 23:05:11 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:14.883 WARNING: No tests are enabled so not running JSON configuration tests 00:05:14.883 23:05:11 -- json_config/json_config.sh@27 -- # exit 0 00:05:14.883 00:05:14.883 real 0m0.178s 00:05:14.883 user 0m0.105s 00:05:14.883 sys 0m0.077s 00:05:14.884 23:05:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:14.884 23:05:11 -- common/autotest_common.sh@10 -- # set +x 00:05:14.884 ************************************ 00:05:14.884 END TEST json_config 00:05:14.884 ************************************ 00:05:14.884 23:05:11 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:14.884 23:05:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.884 23:05:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.884 23:05:11 -- common/autotest_common.sh@10 -- # set +x 00:05:14.884 ************************************ 00:05:14.884 START TEST json_config_extra_key 00:05:14.884 ************************************ 00:05:14.884 23:05:11 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:14.884 23:05:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:14.884 23:05:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:14.884 23:05:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:14.884 23:05:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:14.884 23:05:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:14.884 23:05:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:14.884 23:05:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:14.884 23:05:11 -- scripts/common.sh@335 -- # IFS=.-: 00:05:14.884 23:05:11 -- scripts/common.sh@335 -- # read -ra ver1 00:05:14.884 23:05:11 -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.884 23:05:11 -- scripts/common.sh@336 -- # read -ra ver2 00:05:14.884 23:05:11 -- scripts/common.sh@337 -- # local 'op=<' 00:05:14.884 23:05:11 -- scripts/common.sh@339 -- # ver1_l=2 00:05:14.884 23:05:11 -- scripts/common.sh@340 -- # ver2_l=1 00:05:14.884 23:05:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:14.884 23:05:11 -- scripts/common.sh@343 -- # case "$op" in 00:05:14.884 23:05:11 -- scripts/common.sh@344 -- # : 1 00:05:14.884 23:05:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:14.884 23:05:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.884 23:05:11 -- scripts/common.sh@364 -- # decimal 1 00:05:14.884 23:05:11 -- scripts/common.sh@352 -- # local d=1 00:05:14.884 23:05:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.884 23:05:11 -- scripts/common.sh@354 -- # echo 1 00:05:14.884 23:05:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:14.884 23:05:11 -- scripts/common.sh@365 -- # decimal 2 00:05:14.884 23:05:11 -- scripts/common.sh@352 -- # local d=2 00:05:14.884 23:05:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.884 23:05:11 -- scripts/common.sh@354 -- # echo 2 00:05:14.884 23:05:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:14.884 23:05:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:14.884 23:05:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:14.884 23:05:11 -- scripts/common.sh@367 -- # return 0 00:05:14.884 23:05:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.884 23:05:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:14.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.884 --rc genhtml_branch_coverage=1 00:05:14.884 --rc genhtml_function_coverage=1 00:05:14.884 --rc genhtml_legend=1 00:05:14.884 --rc geninfo_all_blocks=1 00:05:14.884 --rc geninfo_unexecuted_blocks=1 00:05:14.884 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:14.884 ' 00:05:14.884 23:05:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:14.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.884 --rc genhtml_branch_coverage=1 00:05:14.884 --rc genhtml_function_coverage=1 00:05:14.884 --rc genhtml_legend=1 00:05:14.884 --rc geninfo_all_blocks=1 00:05:14.884 --rc geninfo_unexecuted_blocks=1 00:05:14.884 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:14.884 ' 00:05:14.884 23:05:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:14.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.884 --rc genhtml_branch_coverage=1 00:05:14.884 --rc genhtml_function_coverage=1 00:05:14.884 --rc genhtml_legend=1 00:05:14.884 --rc geninfo_all_blocks=1 00:05:14.884 --rc geninfo_unexecuted_blocks=1 00:05:14.884 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:14.884 ' 00:05:14.884 23:05:11 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:14.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.884 --rc genhtml_branch_coverage=1 00:05:14.884 --rc genhtml_function_coverage=1 00:05:14.884 --rc genhtml_legend=1 00:05:14.884 --rc geninfo_all_blocks=1 00:05:14.884 --rc geninfo_unexecuted_blocks=1 00:05:14.884 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:14.884 ' 00:05:14.884 23:05:11 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:05:14.884 23:05:11 -- nvmf/common.sh@7 -- # uname -s 00:05:14.884 23:05:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.884 23:05:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.884 23:05:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.884 23:05:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.884 23:05:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.884 23:05:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.884 23:05:11 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.884 23:05:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.884 23:05:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.884 23:05:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.145 23:05:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:809b5fbc-4be7-e711-906e-0017a4403562 00:05:15.145 23:05:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=809b5fbc-4be7-e711-906e-0017a4403562 00:05:15.145 23:05:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.145 23:05:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.145 23:05:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.145 23:05:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:05:15.145 23:05:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.145 23:05:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.145 23:05:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.145 23:05:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.145 23:05:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.145 23:05:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.145 23:05:11 -- paths/export.sh@5 -- # export PATH 00:05:15.145 23:05:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.145 23:05:11 -- nvmf/common.sh@46 -- # : 0 00:05:15.145 23:05:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:15.145 23:05:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:15.145 23:05:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:15.145 23:05:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.145 23:05:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.145 23:05:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:15.145 23:05:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:15.145 
23:05:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:15.145 INFO: launching applications... 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1273776 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:15.145 Waiting for target to run... 00:05:15.145 23:05:11 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1273776 /var/tmp/spdk_tgt.sock 00:05:15.145 23:05:11 -- common/autotest_common.sh@829 -- # '[' -z 1273776 ']' 00:05:15.145 23:05:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.145 23:05:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.145 23:05:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:15.145 23:05:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.145 23:05:11 -- common/autotest_common.sh@10 -- # set +x 00:05:15.145 [2024-11-17 23:05:11.524447] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
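The trace above launches spdk_tgt with extra_key.json and then blocks in waitforlisten until the target answers on /var/tmp/spdk_tgt.sock. A minimal sketch of that polling pattern follows; the real helper lives in common/autotest_common.sh and is more involved, and the probe below (run from the spdk repo root, using spdk_get_version, a method the target is known to expose) is an assumption about how such a check can be built, not a copy of the helper.

    # Hedged sketch of a waitforlisten-style probe: poll the RPC socket
    # until the freshly started target answers, or give up.
    # Assumes the current directory is the spdk repo root.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock}
        local i max_retries=100
        for ((i = 0; i < max_retries; i++)); do
            # Stop early if the target died before it ever listened.
            kill -0 "$pid" 2>/dev/null || return 1
            if scripts/rpc.py -s "$rpc_addr" spdk_get_version &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }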
00:05:15.145 [2024-11-17 23:05:11.524503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273776 ] 00:05:15.145 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.405 [2024-11-17 23:05:11.808442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.405 [2024-11-17 23:05:11.868869] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:15.405 [2024-11-17 23:05:11.868956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.973 23:05:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.973 23:05:12 -- common/autotest_common.sh@862 -- # return 0 00:05:15.973 23:05:12 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:15.973 00:05:15.973 23:05:12 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:15.973 INFO: shutting down applications... 00:05:15.973 23:05:12 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:15.973 23:05:12 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:15.973 23:05:12 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:15.973 23:05:12 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1273776 ]] 00:05:15.973 23:05:12 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1273776 00:05:15.973 23:05:12 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:15.973 23:05:12 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:15.973 23:05:12 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1273776 00:05:15.973 23:05:12 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:16.543 23:05:12 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:16.543 23:05:12 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:16.543 23:05:12 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1273776 00:05:16.543 23:05:12 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:16.543 23:05:12 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:16.543 23:05:12 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:16.543 23:05:12 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:16.543 SPDK target shutdown done 00:05:16.543 23:05:12 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:16.543 Success 00:05:16.543 00:05:16.543 real 0m1.536s 00:05:16.543 user 0m1.277s 00:05:16.543 sys 0m0.401s 00:05:16.543 23:05:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.543 23:05:12 -- common/autotest_common.sh@10 -- # set +x 00:05:16.543 ************************************ 00:05:16.543 END TEST json_config_extra_key 00:05:16.543 ************************************ 00:05:16.543 23:05:12 -- spdk/autotest.sh@167 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:16.543 23:05:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.543 23:05:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.543 23:05:12 -- common/autotest_common.sh@10 -- # set +x 00:05:16.543 ************************************ 00:05:16.543 START TEST alias_rpc 00:05:16.543 ************************************ 00:05:16.543 23:05:12 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:16.543 * Looking for test storage... 00:05:16.543 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:05:16.543 23:05:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:16.543 23:05:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:16.543 23:05:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:16.543 23:05:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:16.543 23:05:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:16.543 23:05:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:16.543 23:05:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:16.543 23:05:13 -- scripts/common.sh@335 -- # IFS=.-: 00:05:16.543 23:05:13 -- scripts/common.sh@335 -- # read -ra ver1 00:05:16.543 23:05:13 -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.543 23:05:13 -- scripts/common.sh@336 -- # read -ra ver2 00:05:16.543 23:05:13 -- scripts/common.sh@337 -- # local 'op=<' 00:05:16.543 23:05:13 -- scripts/common.sh@339 -- # ver1_l=2 00:05:16.543 23:05:13 -- scripts/common.sh@340 -- # ver2_l=1 00:05:16.543 23:05:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:16.543 23:05:13 -- scripts/common.sh@343 -- # case "$op" in 00:05:16.543 23:05:13 -- scripts/common.sh@344 -- # : 1 00:05:16.543 23:05:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:16.543 23:05:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.544 23:05:13 -- scripts/common.sh@364 -- # decimal 1 00:05:16.544 23:05:13 -- scripts/common.sh@352 -- # local d=1 00:05:16.544 23:05:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.544 23:05:13 -- scripts/common.sh@354 -- # echo 1 00:05:16.544 23:05:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:16.544 23:05:13 -- scripts/common.sh@365 -- # decimal 2 00:05:16.544 23:05:13 -- scripts/common.sh@352 -- # local d=2 00:05:16.544 23:05:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.544 23:05:13 -- scripts/common.sh@354 -- # echo 2 00:05:16.544 23:05:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:16.544 23:05:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:16.544 23:05:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:16.544 23:05:13 -- scripts/common.sh@367 -- # return 0 00:05:16.544 23:05:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.544 23:05:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:16.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.544 --rc genhtml_branch_coverage=1 00:05:16.544 --rc genhtml_function_coverage=1 00:05:16.544 --rc genhtml_legend=1 00:05:16.544 --rc geninfo_all_blocks=1 00:05:16.544 --rc geninfo_unexecuted_blocks=1 00:05:16.544 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:16.544 ' 00:05:16.544 23:05:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:16.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.544 --rc genhtml_branch_coverage=1 00:05:16.544 --rc genhtml_function_coverage=1 00:05:16.544 --rc genhtml_legend=1 00:05:16.544 --rc geninfo_all_blocks=1 00:05:16.544 --rc geninfo_unexecuted_blocks=1 00:05:16.544 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:16.544 ' 00:05:16.544 
23:05:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:16.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.544 --rc genhtml_branch_coverage=1 00:05:16.544 --rc genhtml_function_coverage=1 00:05:16.544 --rc genhtml_legend=1 00:05:16.544 --rc geninfo_all_blocks=1 00:05:16.544 --rc geninfo_unexecuted_blocks=1 00:05:16.544 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:16.544 ' 00:05:16.544 23:05:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:16.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.544 --rc genhtml_branch_coverage=1 00:05:16.544 --rc genhtml_function_coverage=1 00:05:16.544 --rc genhtml_legend=1 00:05:16.544 --rc geninfo_all_blocks=1 00:05:16.544 --rc geninfo_unexecuted_blocks=1 00:05:16.544 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:16.544 ' 00:05:16.544 23:05:13 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:16.544 23:05:13 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1274101 00:05:16.544 23:05:13 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.544 23:05:13 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1274101 00:05:16.544 23:05:13 -- common/autotest_common.sh@829 -- # '[' -z 1274101 ']' 00:05:16.544 23:05:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.544 23:05:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.544 23:05:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.544 23:05:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.544 23:05:13 -- common/autotest_common.sh@10 -- # set +x 00:05:16.544 [2024-11-17 23:05:13.136713] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
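The cmp_versions trace that runs before every suite (lt 1.15 2, the IFS=.-: splits, the decimal normalizer, return 0) decides whether the installed lcov is old enough to need the --rc option spellings. Below is a self-contained re-implementation of the same field-by-field numeric comparison, reconstructed from the xtrace rather than copied from scripts/common.sh, with fields assumed to be plain integers (the real script routes each field through decimal first):

    # Evaluate "ver1 op ver2" for dotted version strings, field by field.
    # Mirrors the cmp_versions 1.15 '<' 2 call seen in the trace.
    cmp_versions_sketch() {
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<<"$1"
        read -ra ver2 <<<"$3"
        local v len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
        for ((v = 0; v < len; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
            if ((a > b)); then [[ $op == '>' ]]; return; fi
            if ((a < b)); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }

    cmp_versions_sketch 1.15 '<' 2 && echo "lcov 1.15 predates 2"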
00:05:16.544 [2024-11-17 23:05:13.136802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1274101 ] 00:05:16.803 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.803 [2024-11-17 23:05:13.205371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.803 [2024-11-17 23:05:13.279227] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:16.803 [2024-11-17 23:05:13.279329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.372 23:05:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.372 23:05:13 -- common/autotest_common.sh@862 -- # return 0 00:05:17.372 23:05:13 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:17.631 23:05:14 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1274101 00:05:17.632 23:05:14 -- common/autotest_common.sh@936 -- # '[' -z 1274101 ']' 00:05:17.632 23:05:14 -- common/autotest_common.sh@940 -- # kill -0 1274101 00:05:17.632 23:05:14 -- common/autotest_common.sh@941 -- # uname 00:05:17.632 23:05:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:17.632 23:05:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1274101 00:05:17.632 23:05:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:17.632 23:05:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:17.632 23:05:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1274101' 00:05:17.632 killing process with pid 1274101 00:05:17.632 23:05:14 -- common/autotest_common.sh@955 -- # kill 1274101 00:05:17.632 23:05:14 -- common/autotest_common.sh@960 -- # wait 1274101 00:05:18.200 00:05:18.200 real 0m1.609s 00:05:18.200 user 0m1.691s 00:05:18.200 sys 0m0.499s 00:05:18.200 23:05:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.200 23:05:14 -- common/autotest_common.sh@10 -- # set +x 00:05:18.200 ************************************ 00:05:18.200 END TEST alias_rpc 00:05:18.200 ************************************ 00:05:18.200 23:05:14 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:18.200 23:05:14 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:18.200 23:05:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.200 23:05:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.200 23:05:14 -- common/autotest_common.sh@10 -- # set +x 00:05:18.200 ************************************ 00:05:18.200 START TEST spdkcli_tcp 00:05:18.200 ************************************ 00:05:18.200 23:05:14 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:18.200 * Looking for test storage... 
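alias_rpc tears its target down through killprocess: check that the PID is still alive, confirm on Linux that it names the reactor rather than a sudo wrapper, then signal it and wait so the exit status is reaped. A hedged reconstruction of that flow from the trace above; the real helper in common/autotest_common.sh has additional branches (non-Linux hosts, the sudo case) that are omitted here.

    # Teardown pattern from the killprocess trace: verify, identify,
    # signal, reap. wait works here because the target is our child.
    killprocess_sketch() {
        local pid=$1 process_name=
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # Refuse to kill a privilege wrapper; we want the reactor itself.
        [[ $process_name != sudo ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }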
00:05:18.200 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:05:18.201 23:05:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:18.201 23:05:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:18.201 23:05:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:18.201 23:05:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:18.201 23:05:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:18.201 23:05:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:18.201 23:05:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:18.201 23:05:14 -- scripts/common.sh@335 -- # IFS=.-: 00:05:18.201 23:05:14 -- scripts/common.sh@335 -- # read -ra ver1 00:05:18.201 23:05:14 -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.201 23:05:14 -- scripts/common.sh@336 -- # read -ra ver2 00:05:18.201 23:05:14 -- scripts/common.sh@337 -- # local 'op=<' 00:05:18.201 23:05:14 -- scripts/common.sh@339 -- # ver1_l=2 00:05:18.201 23:05:14 -- scripts/common.sh@340 -- # ver2_l=1 00:05:18.201 23:05:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:18.201 23:05:14 -- scripts/common.sh@343 -- # case "$op" in 00:05:18.201 23:05:14 -- scripts/common.sh@344 -- # : 1 00:05:18.201 23:05:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:18.201 23:05:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.201 23:05:14 -- scripts/common.sh@364 -- # decimal 1 00:05:18.201 23:05:14 -- scripts/common.sh@352 -- # local d=1 00:05:18.201 23:05:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.201 23:05:14 -- scripts/common.sh@354 -- # echo 1 00:05:18.201 23:05:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:18.201 23:05:14 -- scripts/common.sh@365 -- # decimal 2 00:05:18.201 23:05:14 -- scripts/common.sh@352 -- # local d=2 00:05:18.201 23:05:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.201 23:05:14 -- scripts/common.sh@354 -- # echo 2 00:05:18.201 23:05:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:18.201 23:05:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:18.201 23:05:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:18.201 23:05:14 -- scripts/common.sh@367 -- # return 0 00:05:18.201 23:05:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.201 23:05:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:18.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.201 --rc genhtml_branch_coverage=1 00:05:18.201 --rc genhtml_function_coverage=1 00:05:18.201 --rc genhtml_legend=1 00:05:18.201 --rc geninfo_all_blocks=1 00:05:18.201 --rc geninfo_unexecuted_blocks=1 00:05:18.201 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:18.201 ' 00:05:18.201 23:05:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:18.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.201 --rc genhtml_branch_coverage=1 00:05:18.201 --rc genhtml_function_coverage=1 00:05:18.201 --rc genhtml_legend=1 00:05:18.201 --rc geninfo_all_blocks=1 00:05:18.201 --rc geninfo_unexecuted_blocks=1 00:05:18.201 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:18.201 ' 00:05:18.201 23:05:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:18.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.201 --rc genhtml_branch_coverage=1 
00:05:18.201 --rc genhtml_function_coverage=1 00:05:18.201 --rc genhtml_legend=1 00:05:18.201 --rc geninfo_all_blocks=1 00:05:18.201 --rc geninfo_unexecuted_blocks=1 00:05:18.201 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:18.201 ' 00:05:18.201 23:05:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:18.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.201 --rc genhtml_branch_coverage=1 00:05:18.201 --rc genhtml_function_coverage=1 00:05:18.201 --rc genhtml_legend=1 00:05:18.201 --rc geninfo_all_blocks=1 00:05:18.201 --rc geninfo_unexecuted_blocks=1 00:05:18.201 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:18.201 ' 00:05:18.201 23:05:14 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:05:18.201 23:05:14 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:18.201 23:05:14 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:05:18.201 23:05:14 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:18.201 23:05:14 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:18.201 23:05:14 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:18.201 23:05:14 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:18.201 23:05:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:18.201 23:05:14 -- common/autotest_common.sh@10 -- # set +x 00:05:18.201 23:05:14 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1274438 00:05:18.201 23:05:14 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:18.201 23:05:14 -- spdkcli/tcp.sh@27 -- # waitforlisten 1274438 00:05:18.201 23:05:14 -- common/autotest_common.sh@829 -- # '[' -z 1274438 ']' 00:05:18.201 23:05:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.201 23:05:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.201 23:05:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.201 23:05:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.201 23:05:14 -- common/autotest_common.sh@10 -- # set +x 00:05:18.201 [2024-11-17 23:05:14.790339] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
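Before anything TCP-related starts, tcp.sh arms trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT so that a failure at any point still reaps whatever the test has spawned; the trap is disarmed with trap - once the run succeeds, as the later trace shows. A minimal sketch of that guard pattern; the err_cleanup body below is illustrative only, the real function is defined inside tcp.sh.

    # Cleanup-trap pattern from tcp.sh@21/@37. err_cleanup here is an
    # assumption: it just reaps the PIDs this test is known to create.
    err_cleanup() {
        [[ -n ${socat_pid:-} ]] && kill "$socat_pid" 2>/dev/null
        [[ -n ${spdk_tgt_pid:-} ]] && kill "$spdk_tgt_pid" 2>/dev/null
    }
    trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT

    # ... test body ...

    trap - SIGINT SIGTERM EXIT   # disarm on the success path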
00:05:18.201 [2024-11-17 23:05:14.790408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1274438 ] 00:05:18.460 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.460 [2024-11-17 23:05:14.857453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.460 [2024-11-17 23:05:14.926726] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:18.460 [2024-11-17 23:05:14.926866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.460 [2024-11-17 23:05:14.926868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.029 23:05:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.029 23:05:15 -- common/autotest_common.sh@862 -- # return 0 00:05:19.029 23:05:15 -- spdkcli/tcp.sh@31 -- # socat_pid=1274698 00:05:19.029 23:05:15 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:19.029 23:05:15 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:19.291 [ 00:05:19.291 "spdk_get_version", 00:05:19.291 "rpc_get_methods", 00:05:19.291 "trace_get_info", 00:05:19.291 "trace_get_tpoint_group_mask", 00:05:19.291 "trace_disable_tpoint_group", 00:05:19.291 "trace_enable_tpoint_group", 00:05:19.291 "trace_clear_tpoint_mask", 00:05:19.291 "trace_set_tpoint_mask", 00:05:19.291 "vfu_tgt_set_base_path", 00:05:19.291 "framework_get_pci_devices", 00:05:19.291 "framework_get_config", 00:05:19.291 "framework_get_subsystems", 00:05:19.291 "iobuf_get_stats", 00:05:19.291 "iobuf_set_options", 00:05:19.291 "sock_set_default_impl", 00:05:19.291 "sock_impl_set_options", 00:05:19.291 "sock_impl_get_options", 00:05:19.291 "vmd_rescan", 00:05:19.291 "vmd_remove_device", 00:05:19.291 "vmd_enable", 00:05:19.291 "accel_get_stats", 00:05:19.291 "accel_set_options", 00:05:19.291 "accel_set_driver", 00:05:19.291 "accel_crypto_key_destroy", 00:05:19.291 "accel_crypto_keys_get", 00:05:19.291 "accel_crypto_key_create", 00:05:19.291 "accel_assign_opc", 00:05:19.291 "accel_get_module_info", 00:05:19.291 "accel_get_opc_assignments", 00:05:19.291 "notify_get_notifications", 00:05:19.291 "notify_get_types", 00:05:19.291 "bdev_get_histogram", 00:05:19.291 "bdev_enable_histogram", 00:05:19.291 "bdev_set_qos_limit", 00:05:19.291 "bdev_set_qd_sampling_period", 00:05:19.291 "bdev_get_bdevs", 00:05:19.291 "bdev_reset_iostat", 00:05:19.291 "bdev_get_iostat", 00:05:19.291 "bdev_examine", 00:05:19.291 "bdev_wait_for_examine", 00:05:19.291 "bdev_set_options", 00:05:19.291 "scsi_get_devices", 00:05:19.291 "thread_set_cpumask", 00:05:19.291 "framework_get_scheduler", 00:05:19.291 "framework_set_scheduler", 00:05:19.291 "framework_get_reactors", 00:05:19.291 "thread_get_io_channels", 00:05:19.291 "thread_get_pollers", 00:05:19.291 "thread_get_stats", 00:05:19.291 "framework_monitor_context_switch", 00:05:19.291 "spdk_kill_instance", 00:05:19.291 "log_enable_timestamps", 00:05:19.291 "log_get_flags", 00:05:19.291 "log_clear_flag", 00:05:19.291 "log_set_flag", 00:05:19.291 "log_get_level", 00:05:19.291 "log_set_level", 00:05:19.291 "log_get_print_level", 00:05:19.291 "log_set_print_level", 00:05:19.291 "framework_enable_cpumask_locks", 00:05:19.291 "framework_disable_cpumask_locks", 00:05:19.291 "framework_wait_init", 00:05:19.291 
"framework_start_init", 00:05:19.291 "virtio_blk_create_transport", 00:05:19.291 "virtio_blk_get_transports", 00:05:19.291 "vhost_controller_set_coalescing", 00:05:19.291 "vhost_get_controllers", 00:05:19.291 "vhost_delete_controller", 00:05:19.291 "vhost_create_blk_controller", 00:05:19.291 "vhost_scsi_controller_remove_target", 00:05:19.291 "vhost_scsi_controller_add_target", 00:05:19.291 "vhost_start_scsi_controller", 00:05:19.291 "vhost_create_scsi_controller", 00:05:19.291 "ublk_recover_disk", 00:05:19.291 "ublk_get_disks", 00:05:19.291 "ublk_stop_disk", 00:05:19.291 "ublk_start_disk", 00:05:19.291 "ublk_destroy_target", 00:05:19.291 "ublk_create_target", 00:05:19.291 "nbd_get_disks", 00:05:19.291 "nbd_stop_disk", 00:05:19.291 "nbd_start_disk", 00:05:19.291 "env_dpdk_get_mem_stats", 00:05:19.291 "nvmf_subsystem_get_listeners", 00:05:19.291 "nvmf_subsystem_get_qpairs", 00:05:19.291 "nvmf_subsystem_get_controllers", 00:05:19.291 "nvmf_get_stats", 00:05:19.291 "nvmf_get_transports", 00:05:19.291 "nvmf_create_transport", 00:05:19.291 "nvmf_get_targets", 00:05:19.291 "nvmf_delete_target", 00:05:19.291 "nvmf_create_target", 00:05:19.291 "nvmf_subsystem_allow_any_host", 00:05:19.291 "nvmf_subsystem_remove_host", 00:05:19.291 "nvmf_subsystem_add_host", 00:05:19.291 "nvmf_subsystem_remove_ns", 00:05:19.291 "nvmf_subsystem_add_ns", 00:05:19.291 "nvmf_subsystem_listener_set_ana_state", 00:05:19.291 "nvmf_discovery_get_referrals", 00:05:19.291 "nvmf_discovery_remove_referral", 00:05:19.291 "nvmf_discovery_add_referral", 00:05:19.291 "nvmf_subsystem_remove_listener", 00:05:19.291 "nvmf_subsystem_add_listener", 00:05:19.292 "nvmf_delete_subsystem", 00:05:19.292 "nvmf_create_subsystem", 00:05:19.292 "nvmf_get_subsystems", 00:05:19.292 "nvmf_set_crdt", 00:05:19.292 "nvmf_set_config", 00:05:19.292 "nvmf_set_max_subsystems", 00:05:19.292 "iscsi_set_options", 00:05:19.292 "iscsi_get_auth_groups", 00:05:19.292 "iscsi_auth_group_remove_secret", 00:05:19.292 "iscsi_auth_group_add_secret", 00:05:19.292 "iscsi_delete_auth_group", 00:05:19.292 "iscsi_create_auth_group", 00:05:19.292 "iscsi_set_discovery_auth", 00:05:19.292 "iscsi_get_options", 00:05:19.292 "iscsi_target_node_request_logout", 00:05:19.292 "iscsi_target_node_set_redirect", 00:05:19.292 "iscsi_target_node_set_auth", 00:05:19.292 "iscsi_target_node_add_lun", 00:05:19.292 "iscsi_get_connections", 00:05:19.292 "iscsi_portal_group_set_auth", 00:05:19.292 "iscsi_start_portal_group", 00:05:19.292 "iscsi_delete_portal_group", 00:05:19.292 "iscsi_create_portal_group", 00:05:19.292 "iscsi_get_portal_groups", 00:05:19.292 "iscsi_delete_target_node", 00:05:19.292 "iscsi_target_node_remove_pg_ig_maps", 00:05:19.292 "iscsi_target_node_add_pg_ig_maps", 00:05:19.292 "iscsi_create_target_node", 00:05:19.292 "iscsi_get_target_nodes", 00:05:19.292 "iscsi_delete_initiator_group", 00:05:19.292 "iscsi_initiator_group_remove_initiators", 00:05:19.292 "iscsi_initiator_group_add_initiators", 00:05:19.292 "iscsi_create_initiator_group", 00:05:19.292 "iscsi_get_initiator_groups", 00:05:19.292 "vfu_virtio_create_scsi_endpoint", 00:05:19.292 "vfu_virtio_scsi_remove_target", 00:05:19.292 "vfu_virtio_scsi_add_target", 00:05:19.292 "vfu_virtio_create_blk_endpoint", 00:05:19.292 "vfu_virtio_delete_endpoint", 00:05:19.292 "iaa_scan_accel_module", 00:05:19.292 "dsa_scan_accel_module", 00:05:19.292 "ioat_scan_accel_module", 00:05:19.292 "accel_error_inject_error", 00:05:19.292 "bdev_iscsi_delete", 00:05:19.292 "bdev_iscsi_create", 00:05:19.292 "bdev_iscsi_set_options", 
00:05:19.292 "bdev_virtio_attach_controller", 00:05:19.292 "bdev_virtio_scsi_get_devices", 00:05:19.292 "bdev_virtio_detach_controller", 00:05:19.292 "bdev_virtio_blk_set_hotplug", 00:05:19.292 "bdev_ftl_set_property", 00:05:19.292 "bdev_ftl_get_properties", 00:05:19.292 "bdev_ftl_get_stats", 00:05:19.292 "bdev_ftl_unmap", 00:05:19.292 "bdev_ftl_unload", 00:05:19.292 "bdev_ftl_delete", 00:05:19.292 "bdev_ftl_load", 00:05:19.292 "bdev_ftl_create", 00:05:19.292 "bdev_aio_delete", 00:05:19.292 "bdev_aio_rescan", 00:05:19.292 "bdev_aio_create", 00:05:19.292 "blobfs_create", 00:05:19.292 "blobfs_detect", 00:05:19.292 "blobfs_set_cache_size", 00:05:19.292 "bdev_zone_block_delete", 00:05:19.292 "bdev_zone_block_create", 00:05:19.292 "bdev_delay_delete", 00:05:19.292 "bdev_delay_create", 00:05:19.292 "bdev_delay_update_latency", 00:05:19.292 "bdev_split_delete", 00:05:19.292 "bdev_split_create", 00:05:19.292 "bdev_error_inject_error", 00:05:19.292 "bdev_error_delete", 00:05:19.292 "bdev_error_create", 00:05:19.292 "bdev_raid_set_options", 00:05:19.292 "bdev_raid_remove_base_bdev", 00:05:19.292 "bdev_raid_add_base_bdev", 00:05:19.292 "bdev_raid_delete", 00:05:19.292 "bdev_raid_create", 00:05:19.292 "bdev_raid_get_bdevs", 00:05:19.292 "bdev_lvol_grow_lvstore", 00:05:19.292 "bdev_lvol_get_lvols", 00:05:19.292 "bdev_lvol_get_lvstores", 00:05:19.292 "bdev_lvol_delete", 00:05:19.292 "bdev_lvol_set_read_only", 00:05:19.292 "bdev_lvol_resize", 00:05:19.292 "bdev_lvol_decouple_parent", 00:05:19.292 "bdev_lvol_inflate", 00:05:19.292 "bdev_lvol_rename", 00:05:19.292 "bdev_lvol_clone_bdev", 00:05:19.292 "bdev_lvol_clone", 00:05:19.292 "bdev_lvol_snapshot", 00:05:19.292 "bdev_lvol_create", 00:05:19.292 "bdev_lvol_delete_lvstore", 00:05:19.292 "bdev_lvol_rename_lvstore", 00:05:19.292 "bdev_lvol_create_lvstore", 00:05:19.292 "bdev_passthru_delete", 00:05:19.292 "bdev_passthru_create", 00:05:19.292 "bdev_nvme_cuse_unregister", 00:05:19.292 "bdev_nvme_cuse_register", 00:05:19.292 "bdev_opal_new_user", 00:05:19.292 "bdev_opal_set_lock_state", 00:05:19.292 "bdev_opal_delete", 00:05:19.292 "bdev_opal_get_info", 00:05:19.292 "bdev_opal_create", 00:05:19.292 "bdev_nvme_opal_revert", 00:05:19.292 "bdev_nvme_opal_init", 00:05:19.292 "bdev_nvme_send_cmd", 00:05:19.292 "bdev_nvme_get_path_iostat", 00:05:19.292 "bdev_nvme_get_mdns_discovery_info", 00:05:19.292 "bdev_nvme_stop_mdns_discovery", 00:05:19.292 "bdev_nvme_start_mdns_discovery", 00:05:19.292 "bdev_nvme_set_multipath_policy", 00:05:19.292 "bdev_nvme_set_preferred_path", 00:05:19.292 "bdev_nvme_get_io_paths", 00:05:19.292 "bdev_nvme_remove_error_injection", 00:05:19.292 "bdev_nvme_add_error_injection", 00:05:19.292 "bdev_nvme_get_discovery_info", 00:05:19.292 "bdev_nvme_stop_discovery", 00:05:19.292 "bdev_nvme_start_discovery", 00:05:19.292 "bdev_nvme_get_controller_health_info", 00:05:19.292 "bdev_nvme_disable_controller", 00:05:19.292 "bdev_nvme_enable_controller", 00:05:19.292 "bdev_nvme_reset_controller", 00:05:19.292 "bdev_nvme_get_transport_statistics", 00:05:19.292 "bdev_nvme_apply_firmware", 00:05:19.292 "bdev_nvme_detach_controller", 00:05:19.292 "bdev_nvme_get_controllers", 00:05:19.292 "bdev_nvme_attach_controller", 00:05:19.292 "bdev_nvme_set_hotplug", 00:05:19.292 "bdev_nvme_set_options", 00:05:19.292 "bdev_null_resize", 00:05:19.292 "bdev_null_delete", 00:05:19.292 "bdev_null_create", 00:05:19.292 "bdev_malloc_delete", 00:05:19.292 "bdev_malloc_create" 00:05:19.292 ] 00:05:19.292 23:05:15 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 
00:05:19.292 23:05:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:19.292 23:05:15 -- common/autotest_common.sh@10 -- # set +x 00:05:19.292 23:05:15 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:19.292 23:05:15 -- spdkcli/tcp.sh@38 -- # killprocess 1274438 00:05:19.292 23:05:15 -- common/autotest_common.sh@936 -- # '[' -z 1274438 ']' 00:05:19.292 23:05:15 -- common/autotest_common.sh@940 -- # kill -0 1274438 00:05:19.292 23:05:15 -- common/autotest_common.sh@941 -- # uname 00:05:19.292 23:05:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:19.292 23:05:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1274438 00:05:19.551 23:05:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:19.551 23:05:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:19.551 23:05:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1274438' 00:05:19.551 killing process with pid 1274438 00:05:19.551 23:05:15 -- common/autotest_common.sh@955 -- # kill 1274438 00:05:19.551 23:05:15 -- common/autotest_common.sh@960 -- # wait 1274438 00:05:19.811 00:05:19.811 real 0m1.628s 00:05:19.811 user 0m2.946s 00:05:19.811 sys 0m0.526s 00:05:19.811 23:05:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:19.811 23:05:16 -- common/autotest_common.sh@10 -- # set +x 00:05:19.811 ************************************ 00:05:19.811 END TEST spdkcli_tcp 00:05:19.811 ************************************ 00:05:19.811 23:05:16 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:19.811 23:05:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:19.811 23:05:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.811 23:05:16 -- common/autotest_common.sh@10 -- # set +x 00:05:19.811 ************************************ 00:05:19.811 START TEST dpdk_mem_utility 00:05:19.811 ************************************ 00:05:19.811 23:05:16 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:19.811 * Looking for test storage... 
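Every suite in this log passes through the same run_test wrapper: a START banner, the timed script, the real/user/sys summary, an END banner. A rough sketch of that shape, inferred from the banners and timing lines rather than taken from common/autotest_common.sh@1114 itself (the real wrapper also manages xtrace and argument checks):

    # Banner-and-time wrapper matching the log's START/END TEST blocks.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test_sketch dpdk_mem_utility ./test/dpdk_memory_utility/test_dpdk_mem_info.sh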
00:05:19.811 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:05:19.811 23:05:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:19.811 23:05:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:19.811 23:05:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:19.811 23:05:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:19.811 23:05:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:19.811 23:05:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:19.811 23:05:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:19.811 23:05:16 -- scripts/common.sh@335 -- # IFS=.-: 00:05:19.811 23:05:16 -- scripts/common.sh@335 -- # read -ra ver1 00:05:19.811 23:05:16 -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.811 23:05:16 -- scripts/common.sh@336 -- # read -ra ver2 00:05:19.811 23:05:16 -- scripts/common.sh@337 -- # local 'op=<' 00:05:19.811 23:05:16 -- scripts/common.sh@339 -- # ver1_l=2 00:05:19.811 23:05:16 -- scripts/common.sh@340 -- # ver2_l=1 00:05:19.811 23:05:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:19.811 23:05:16 -- scripts/common.sh@343 -- # case "$op" in 00:05:19.811 23:05:16 -- scripts/common.sh@344 -- # : 1 00:05:19.811 23:05:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:19.811 23:05:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.072 23:05:16 -- scripts/common.sh@364 -- # decimal 1 00:05:20.072 23:05:16 -- scripts/common.sh@352 -- # local d=1 00:05:20.072 23:05:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.072 23:05:16 -- scripts/common.sh@354 -- # echo 1 00:05:20.072 23:05:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:20.072 23:05:16 -- scripts/common.sh@365 -- # decimal 2 00:05:20.072 23:05:16 -- scripts/common.sh@352 -- # local d=2 00:05:20.072 23:05:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.072 23:05:16 -- scripts/common.sh@354 -- # echo 2 00:05:20.072 23:05:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:20.072 23:05:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:20.072 23:05:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:20.072 23:05:16 -- scripts/common.sh@367 -- # return 0 00:05:20.072 23:05:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.072 23:05:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:20.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.072 --rc genhtml_branch_coverage=1 00:05:20.072 --rc genhtml_function_coverage=1 00:05:20.072 --rc genhtml_legend=1 00:05:20.072 --rc geninfo_all_blocks=1 00:05:20.072 --rc geninfo_unexecuted_blocks=1 00:05:20.072 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:20.072 ' 00:05:20.072 23:05:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:20.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.072 --rc genhtml_branch_coverage=1 00:05:20.072 --rc genhtml_function_coverage=1 00:05:20.072 --rc genhtml_legend=1 00:05:20.072 --rc geninfo_all_blocks=1 00:05:20.072 --rc geninfo_unexecuted_blocks=1 00:05:20.072 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:20.072 ' 00:05:20.072 23:05:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:20.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.072 --rc 
genhtml_branch_coverage=1 00:05:20.072 --rc genhtml_function_coverage=1 00:05:20.072 --rc genhtml_legend=1 00:05:20.072 --rc geninfo_all_blocks=1 00:05:20.072 --rc geninfo_unexecuted_blocks=1 00:05:20.072 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:20.072 ' 00:05:20.072 23:05:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:20.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.072 --rc genhtml_branch_coverage=1 00:05:20.072 --rc genhtml_function_coverage=1 00:05:20.072 --rc genhtml_legend=1 00:05:20.072 --rc geninfo_all_blocks=1 00:05:20.072 --rc geninfo_unexecuted_blocks=1 00:05:20.072 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:20.072 ' 00:05:20.072 23:05:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:20.072 23:05:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1274783 00:05:20.072 23:05:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1274783 00:05:20.072 23:05:16 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:05:20.072 23:05:16 -- common/autotest_common.sh@829 -- # '[' -z 1274783 ']' 00:05:20.072 23:05:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.072 23:05:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.072 23:05:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.072 23:05:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.072 23:05:16 -- common/autotest_common.sh@10 -- # set +x 00:05:20.072 [2024-11-17 23:05:16.463091] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
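With the target up, the test drives MEM_SCRIPT twice, once for the global summary and once with -m 0 for heap 0, producing the dump that follows. Equivalent manual usage against a running target is sketched below, assuming the spdk repo root as working directory and the default RPC socket; the env_dpdk_get_mem_stats reply names the dump file, which the parser picks up in this run.

    # Dump the target's DPDK memory state, then render it.
    scripts/rpc.py env_dpdk_get_mem_stats    # reply: /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                 # heap/mempool/memzone summary
    scripts/dpdk_mem_info.py -m 0            # detailed view of heap id 0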
00:05:20.072 [2024-11-17 23:05:16.463164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1274783 ] 00:05:20.072 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.072 [2024-11-17 23:05:16.532481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.072 [2024-11-17 23:05:16.601180] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:20.072 [2024-11-17 23:05:16.601290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.012 23:05:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.012 23:05:17 -- common/autotest_common.sh@862 -- # return 0 00:05:21.012 23:05:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:21.012 23:05:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:21.012 23:05:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.012 23:05:17 -- common/autotest_common.sh@10 -- # set +x 00:05:21.012 { 00:05:21.012 "filename": "/tmp/spdk_mem_dump.txt" 00:05:21.012 } 00:05:21.012 23:05:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.012 23:05:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:21.012 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:21.012 1 heaps totaling size 814.000000 MiB 00:05:21.012 size: 814.000000 MiB heap id: 0 00:05:21.012 end heaps---------- 00:05:21.012 8 mempools totaling size 598.116089 MiB 00:05:21.012 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:21.012 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:21.012 size: 84.521057 MiB name: bdev_io_1274783 00:05:21.012 size: 51.011292 MiB name: evtpool_1274783 00:05:21.012 size: 50.003479 MiB name: msgpool_1274783 00:05:21.012 size: 21.763794 MiB name: PDU_Pool 00:05:21.012 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:21.012 size: 0.026123 MiB name: Session_Pool 00:05:21.012 end mempools------- 00:05:21.012 6 memzones totaling size 4.142822 MiB 00:05:21.012 size: 1.000366 MiB name: RG_ring_0_1274783 00:05:21.012 size: 1.000366 MiB name: RG_ring_1_1274783 00:05:21.012 size: 1.000366 MiB name: RG_ring_4_1274783 00:05:21.012 size: 1.000366 MiB name: RG_ring_5_1274783 00:05:21.012 size: 0.125366 MiB name: RG_ring_2_1274783 00:05:21.012 size: 0.015991 MiB name: RG_ring_3_1274783 00:05:21.012 end memzones------- 00:05:21.012 23:05:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:21.012 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:21.012 list of free elements. 
size: 12.519348 MiB 00:05:21.012 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:21.012 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:21.012 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:21.012 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:21.012 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:21.012 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:21.012 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:21.012 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:21.012 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:21.012 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:21.012 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:21.012 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:21.012 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:21.012 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:21.012 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:21.012 list of standard malloc elements. size: 199.218079 MiB 00:05:21.012 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:21.012 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:21.012 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:21.012 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:21.012 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:21.012 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:21.012 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:21.012 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:21.012 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:21.012 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:21.012 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:21.012 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:21.012 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:21.012 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:21.012 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:21.012 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:21.012 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:21.012 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:21.012 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:21.012 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:21.012 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:21.012 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:21.012 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:21.012 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:21.012 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:21.012 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:21.012 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:21.012 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:21.012 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:21.012 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:21.012 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:21.012 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:21.012 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:21.012 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:21.012 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:21.012 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:21.012 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:21.012 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:21.012 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:21.012 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:21.012 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:21.012 list of memzone associated elements. size: 602.262573 MiB 00:05:21.012 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:21.012 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:21.012 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:21.012 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:21.012 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:21.012 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1274783_0 00:05:21.012 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:21.012 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1274783_0 00:05:21.012 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:21.012 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1274783_0 00:05:21.012 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:21.012 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:21.012 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:21.012 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:21.012 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:21.012 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1274783 00:05:21.012 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:21.013 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1274783 00:05:21.013 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:21.013 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1274783 00:05:21.013 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:21.013 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:21.013 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:21.013 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:21.013 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:21.013 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:21.013 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:21.013 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:21.013 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:21.013 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1274783 00:05:21.013 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:21.013 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1274783 00:05:21.013 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:21.013 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1274783 00:05:21.013 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:21.013 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1274783 00:05:21.013 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:21.013 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1274783 00:05:21.013 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:21.013 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:21.013 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:21.013 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:21.013 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:21.013 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:21.013 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:21.013 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1274783 00:05:21.013 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:21.013 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:21.013 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:21.013 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:21.013 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:21.013 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1274783 00:05:21.013 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:21.013 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:21.013 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:21.013 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1274783 00:05:21.013 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:21.013 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1274783 00:05:21.013 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:21.013 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:21.013 23:05:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:21.013 23:05:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1274783 00:05:21.013 23:05:17 -- common/autotest_common.sh@936 -- # '[' -z 1274783 ']' 00:05:21.013 23:05:17 -- common/autotest_common.sh@940 -- # kill -0 1274783 00:05:21.013 23:05:17 -- common/autotest_common.sh@941 -- # uname 00:05:21.013 23:05:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:21.013 23:05:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1274783 00:05:21.013 23:05:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:21.013 23:05:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:21.013 23:05:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1274783' 00:05:21.013 killing process with pid 1274783 00:05:21.013 23:05:17 -- common/autotest_common.sh@955 -- # kill 1274783 00:05:21.013 23:05:17 -- common/autotest_common.sh@960 -- # wait 1274783 00:05:21.273 00:05:21.273 real 0m1.517s 00:05:21.273 user 0m1.561s 00:05:21.273 sys 0m0.468s 00:05:21.273 23:05:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.273 23:05:17 -- common/autotest_common.sh@10 -- # set +x 00:05:21.273 ************************************ 00:05:21.273 END TEST dpdk_mem_utility 00:05:21.273 ************************************ 00:05:21.273 23:05:17 -- spdk/autotest.sh@174 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:21.273 23:05:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.273 23:05:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.273 23:05:17 -- common/autotest_common.sh@10 -- # set +x 
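[annotation] The killprocess trace above (uname check, ps lookup of the reactor name, kill, wait) is the harness's standard target teardown. A rough standalone equivalent under the assumption that the PID is a child of the current shell, as it is here; kill_and_wait and its polling loop are illustrative, not the harness helper:

kill_and_wait() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0    # nothing to do if already gone
    kill "$pid"                               # SIGTERM, as in the trace
    while kill -0 "$pid" 2>/dev/null; do      # poll until the process exits
        sleep 0.1
    done
}
kill_and_wait 1274783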
00:05:21.273 ************************************ 00:05:21.273 START TEST event 00:05:21.273 ************************************ 00:05:21.273 23:05:17 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:05:21.533 * Looking for test storage... 00:05:21.533 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:21.533 23:05:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:21.533 23:05:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:21.533 23:05:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:21.533 23:05:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:21.533 23:05:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:21.533 23:05:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:21.533 23:05:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:21.534 23:05:17 -- scripts/common.sh@335 -- # IFS=.-: 00:05:21.534 23:05:17 -- scripts/common.sh@335 -- # read -ra ver1 00:05:21.534 23:05:17 -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.534 23:05:17 -- scripts/common.sh@336 -- # read -ra ver2 00:05:21.534 23:05:17 -- scripts/common.sh@337 -- # local 'op=<' 00:05:21.534 23:05:17 -- scripts/common.sh@339 -- # ver1_l=2 00:05:21.534 23:05:17 -- scripts/common.sh@340 -- # ver2_l=1 00:05:21.534 23:05:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:21.534 23:05:17 -- scripts/common.sh@343 -- # case "$op" in 00:05:21.534 23:05:17 -- scripts/common.sh@344 -- # : 1 00:05:21.534 23:05:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:21.534 23:05:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.534 23:05:17 -- scripts/common.sh@364 -- # decimal 1 00:05:21.534 23:05:17 -- scripts/common.sh@352 -- # local d=1 00:05:21.534 23:05:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.534 23:05:17 -- scripts/common.sh@354 -- # echo 1 00:05:21.534 23:05:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:21.534 23:05:18 -- scripts/common.sh@365 -- # decimal 2 00:05:21.534 23:05:18 -- scripts/common.sh@352 -- # local d=2 00:05:21.534 23:05:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.534 23:05:18 -- scripts/common.sh@354 -- # echo 2 00:05:21.534 23:05:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:21.534 23:05:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:21.534 23:05:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:21.534 23:05:18 -- scripts/common.sh@367 -- # return 0 00:05:21.534 23:05:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.534 23:05:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:21.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.534 --rc genhtml_branch_coverage=1 00:05:21.534 --rc genhtml_function_coverage=1 00:05:21.534 --rc genhtml_legend=1 00:05:21.534 --rc geninfo_all_blocks=1 00:05:21.534 --rc geninfo_unexecuted_blocks=1 00:05:21.534 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.534 ' 00:05:21.534 23:05:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:21.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.534 --rc genhtml_branch_coverage=1 00:05:21.534 --rc genhtml_function_coverage=1 00:05:21.534 --rc genhtml_legend=1 00:05:21.534 --rc geninfo_all_blocks=1 00:05:21.534 --rc geninfo_unexecuted_blocks=1 00:05:21.534 
--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.534 ' 00:05:21.534 23:05:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:21.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.534 --rc genhtml_branch_coverage=1 00:05:21.534 --rc genhtml_function_coverage=1 00:05:21.534 --rc genhtml_legend=1 00:05:21.534 --rc geninfo_all_blocks=1 00:05:21.534 --rc geninfo_unexecuted_blocks=1 00:05:21.534 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.534 ' 00:05:21.534 23:05:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:21.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.534 --rc genhtml_branch_coverage=1 00:05:21.534 --rc genhtml_function_coverage=1 00:05:21.534 --rc genhtml_legend=1 00:05:21.534 --rc geninfo_all_blocks=1 00:05:21.534 --rc geninfo_unexecuted_blocks=1 00:05:21.534 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.534 ' 00:05:21.534 23:05:18 -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:21.534 23:05:18 -- bdev/nbd_common.sh@6 -- # set -e 00:05:21.534 23:05:18 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:21.534 23:05:18 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:21.534 23:05:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.534 23:05:18 -- common/autotest_common.sh@10 -- # set +x 00:05:21.534 ************************************ 00:05:21.534 START TEST event_perf 00:05:21.534 ************************************ 00:05:21.534 23:05:18 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:21.534 Running I/O for 1 seconds...[2024-11-17 23:05:18.036853] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:21.534 [2024-11-17 23:05:18.036962] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275134 ] 00:05:21.534 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.534 [2024-11-17 23:05:18.107706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:21.794 [2024-11-17 23:05:18.180574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.794 [2024-11-17 23:05:18.180610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.794 [2024-11-17 23:05:18.180696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:21.794 [2024-11-17 23:05:18.180708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.738 Running I/O for 1 seconds... 00:05:22.738 lcore 0: 196428 00:05:22.738 lcore 1: 196428 00:05:22.738 lcore 2: 196427 00:05:22.738 lcore 3: 196428 00:05:22.738 done. 
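[annotation] event_perf above runs one event loop per reactor on every core in the -m mask for -t seconds and prints a per-lcore tally; the four counts being nearly identical shows events were spread evenly across the 0xF mask. An illustrative rerun with a smaller mask, totalled with awk; $SPDK_DIR is an assumed stand-in for the checkout path used in this workspace:

# Illustrative only; 2 cores (0x3) for 5 seconds, then sum the lcore lines.
"$SPDK_DIR"/test/event/event_perf/event_perf -m 0x3 -t 5 |
    awk '/^lcore/ {sum += $3} END {print "total events:", sum}'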
00:05:22.738 00:05:22.738 real 0m1.226s 00:05:22.738 user 0m4.132s 00:05:22.738 sys 0m0.090s 00:05:22.738 23:05:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.738 23:05:19 -- common/autotest_common.sh@10 -- # set +x 00:05:22.738 ************************************ 00:05:22.738 END TEST event_perf 00:05:22.738 ************************************ 00:05:22.738 23:05:19 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:22.738 23:05:19 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:22.738 23:05:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.738 23:05:19 -- common/autotest_common.sh@10 -- # set +x 00:05:22.738 ************************************ 00:05:22.738 START TEST event_reactor 00:05:22.738 ************************************ 00:05:22.738 23:05:19 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:22.738 [2024-11-17 23:05:19.311739] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:22.738 [2024-11-17 23:05:19.311849] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275405 ] 00:05:22.738 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.999 [2024-11-17 23:05:19.382054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.999 [2024-11-17 23:05:19.448569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.939 test_start 00:05:23.939 oneshot 00:05:23.939 tick 100 00:05:23.939 tick 100 00:05:23.939 tick 250 00:05:23.939 tick 100 00:05:23.939 tick 100 00:05:23.939 tick 100 00:05:23.939 tick 250 00:05:23.939 tick 500 00:05:23.939 tick 100 00:05:23.939 tick 100 00:05:23.939 tick 250 00:05:23.939 tick 100 00:05:23.939 tick 100 00:05:23.939 test_end 00:05:23.939 00:05:23.939 real 0m1.223s 00:05:23.939 user 0m1.132s 00:05:23.939 sys 0m0.087s 00:05:23.939 23:05:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:23.939 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:05:23.939 ************************************ 00:05:23.939 END TEST event_reactor 00:05:23.939 ************************************ 00:05:24.199 23:05:20 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:24.199 23:05:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:24.199 23:05:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.199 23:05:20 -- common/autotest_common.sh@10 -- # set +x 00:05:24.199 ************************************ 00:05:24.199 START TEST event_reactor_perf 00:05:24.199 ************************************ 00:05:24.199 23:05:20 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:24.199 [2024-11-17 23:05:20.581523] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:24.199 [2024-11-17 23:05:20.581647] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275689 ] 00:05:24.199 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.199 [2024-11-17 23:05:20.652936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.199 [2024-11-17 23:05:20.719552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.579 test_start 00:05:25.579 test_end 00:05:25.579 Performance: 966839 events per second 00:05:25.579 00:05:25.579 real 0m1.221s 00:05:25.579 user 0m1.130s 00:05:25.579 sys 0m0.087s 00:05:25.579 23:05:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:25.579 23:05:21 -- common/autotest_common.sh@10 -- # set +x 00:05:25.579 ************************************ 00:05:25.579 END TEST event_reactor_perf 00:05:25.579 ************************************ 00:05:25.579 23:05:21 -- event/event.sh@49 -- # uname -s 00:05:25.579 23:05:21 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:25.579 23:05:21 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:25.579 23:05:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.579 23:05:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.579 23:05:21 -- common/autotest_common.sh@10 -- # set +x 00:05:25.579 ************************************ 00:05:25.579 START TEST event_scheduler 00:05:25.579 ************************************ 00:05:25.579 23:05:21 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:25.579 * Looking for test storage... 00:05:25.579 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:05:25.579 23:05:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:25.579 23:05:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:25.579 23:05:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:25.579 23:05:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:25.579 23:05:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:25.579 23:05:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:25.579 23:05:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:25.579 23:05:22 -- scripts/common.sh@335 -- # IFS=.-: 00:05:25.579 23:05:22 -- scripts/common.sh@335 -- # read -ra ver1 00:05:25.579 23:05:22 -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.579 23:05:22 -- scripts/common.sh@336 -- # read -ra ver2 00:05:25.579 23:05:22 -- scripts/common.sh@337 -- # local 'op=<' 00:05:25.580 23:05:22 -- scripts/common.sh@339 -- # ver1_l=2 00:05:25.580 23:05:22 -- scripts/common.sh@340 -- # ver2_l=1 00:05:25.580 23:05:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:25.580 23:05:22 -- scripts/common.sh@343 -- # case "$op" in 00:05:25.580 23:05:22 -- scripts/common.sh@344 -- # : 1 00:05:25.580 23:05:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:25.580 23:05:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.580 23:05:22 -- scripts/common.sh@364 -- # decimal 1 00:05:25.580 23:05:22 -- scripts/common.sh@352 -- # local d=1 00:05:25.580 23:05:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.580 23:05:22 -- scripts/common.sh@354 -- # echo 1 00:05:25.580 23:05:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:25.580 23:05:22 -- scripts/common.sh@365 -- # decimal 2 00:05:25.580 23:05:22 -- scripts/common.sh@352 -- # local d=2 00:05:25.580 23:05:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.580 23:05:22 -- scripts/common.sh@354 -- # echo 2 00:05:25.580 23:05:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:25.580 23:05:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:25.580 23:05:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:25.580 23:05:22 -- scripts/common.sh@367 -- # return 0 00:05:25.580 23:05:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.580 23:05:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:25.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.580 --rc genhtml_branch_coverage=1 00:05:25.580 --rc genhtml_function_coverage=1 00:05:25.580 --rc genhtml_legend=1 00:05:25.580 --rc geninfo_all_blocks=1 00:05:25.580 --rc geninfo_unexecuted_blocks=1 00:05:25.580 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:25.580 ' 00:05:25.580 23:05:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:25.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.580 --rc genhtml_branch_coverage=1 00:05:25.580 --rc genhtml_function_coverage=1 00:05:25.580 --rc genhtml_legend=1 00:05:25.580 --rc geninfo_all_blocks=1 00:05:25.580 --rc geninfo_unexecuted_blocks=1 00:05:25.580 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:25.580 ' 00:05:25.580 23:05:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:25.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.580 --rc genhtml_branch_coverage=1 00:05:25.580 --rc genhtml_function_coverage=1 00:05:25.580 --rc genhtml_legend=1 00:05:25.580 --rc geninfo_all_blocks=1 00:05:25.580 --rc geninfo_unexecuted_blocks=1 00:05:25.580 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:25.580 ' 00:05:25.580 23:05:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:25.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.580 --rc genhtml_branch_coverage=1 00:05:25.580 --rc genhtml_function_coverage=1 00:05:25.580 --rc genhtml_legend=1 00:05:25.580 --rc geninfo_all_blocks=1 00:05:25.580 --rc geninfo_unexecuted_blocks=1 00:05:25.580 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:25.580 ' 00:05:25.580 23:05:22 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:25.580 23:05:22 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1276010 00:05:25.580 23:05:22 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.580 23:05:22 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:25.580 23:05:22 -- scheduler/scheduler.sh@37 -- # waitforlisten 1276010 00:05:25.580 23:05:22 -- common/autotest_common.sh@829 -- # '[' -z 1276010 ']' 00:05:25.580 23:05:22 -- common/autotest_common.sh@833 
-- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.580 23:05:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.580 23:05:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.580 23:05:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.580 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.580 [2024-11-17 23:05:22.053961] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:25.580 [2024-11-17 23:05:22.054040] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276010 ] 00:05:25.580 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.580 [2024-11-17 23:05:22.118643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:25.580 [2024-11-17 23:05:22.191221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.580 [2024-11-17 23:05:22.191303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.580 [2024-11-17 23:05:22.191388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:25.580 [2024-11-17 23:05:22.191390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.840 23:05:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.840 23:05:22 -- common/autotest_common.sh@862 -- # return 0 00:05:25.840 23:05:22 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:25.840 23:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.840 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.840 POWER: Env isn't set yet! 00:05:25.840 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:25.840 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:25.840 POWER: Cannot set governor of lcore 0 to userspace 00:05:25.840 POWER: Attempting to initialise PSTAT power management... 
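[annotation] The POWER messages here come from DPDK's power library switching each lcore's cpufreq governor to 'performance' for the scheduler test; the matching messages at test exit switch them back to 'powersave'. A hedged way to inspect the same state from the shell through the standard Linux cpufreq sysfs paths (the core list 0-3 mirrors the 0xF mask and is illustrative):

for c in 0 1 2 3; do
    g=/sys/devices/system/cpu/cpu$c/cpufreq/scaling_governor
    printf 'cpu%s: %s\n' "$c" "$(cat "$g")"
    # echo performance | sudo tee "$g"   # manual override, if ever needed
done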
00:05:25.840 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:25.840 POWER: Initialized successfully for lcore 0 power management 00:05:25.840 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:25.840 POWER: Initialized successfully for lcore 1 power management 00:05:25.840 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:25.840 POWER: Initialized successfully for lcore 2 power management 00:05:25.840 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:25.840 POWER: Initialized successfully for lcore 3 power management 00:05:25.840 [2024-11-17 23:05:22.263696] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:25.840 [2024-11-17 23:05:22.263711] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:25.840 [2024-11-17 23:05:22.263721] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:25.840 23:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.840 23:05:22 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:25.840 23:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.840 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.840 [2024-11-17 23:05:22.333837] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:25.840 23:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.840 23:05:22 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:25.840 23:05:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.840 23:05:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.840 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.840 ************************************ 00:05:25.840 START TEST scheduler_create_thread 00:05:25.840 ************************************ 00:05:25.840 23:05:22 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:25.840 23:05:22 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:25.840 23:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.840 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.840 2 00:05:25.840 23:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.840 23:05:22 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:25.840 23:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.840 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.840 3 00:05:25.840 23:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.840 23:05:22 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:25.840 23:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.840 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.840 4 00:05:25.840 23:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.840 23:05:22 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:25.840 23:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.840 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.840 5 00:05:25.840 
23:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.840 23:05:22 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:25.840 23:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.840 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.840 6 00:05:25.840 23:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.840 23:05:22 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:25.840 23:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.840 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.840 7 00:05:25.840 23:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.840 23:05:22 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:25.840 23:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.840 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.840 8 00:05:25.840 23:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.840 23:05:22 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:25.841 23:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.841 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.841 9 00:05:25.841 23:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.841 23:05:22 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:25.841 23:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.841 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.841 10 00:05:25.841 23:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.841 23:05:22 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:25.841 23:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.841 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:25.841 23:05:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.841 23:05:22 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:25.841 23:05:22 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:25.841 23:05:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.841 23:05:22 -- common/autotest_common.sh@10 -- # set +x 00:05:26.781 23:05:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.781 23:05:23 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:26.781 23:05:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.781 23:05:23 -- common/autotest_common.sh@10 -- # set +x 00:05:28.162 23:05:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.162 23:05:24 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:28.162 23:05:24 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:28.162 23:05:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.162 23:05:24 -- common/autotest_common.sh@10 -- # set +x 00:05:29.544 23:05:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.544 00:05:29.544 real 0m3.382s 00:05:29.544 user 0m0.024s 00:05:29.544 sys 0m0.007s 00:05:29.544 23:05:25 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.544 23:05:25 -- common/autotest_common.sh@10 -- # set +x 00:05:29.544 ************************************ 00:05:29.544 END TEST scheduler_create_thread 00:05:29.544 ************************************ 00:05:29.544 23:05:25 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:29.544 23:05:25 -- scheduler/scheduler.sh@46 -- # killprocess 1276010 00:05:29.544 23:05:25 -- common/autotest_common.sh@936 -- # '[' -z 1276010 ']' 00:05:29.544 23:05:25 -- common/autotest_common.sh@940 -- # kill -0 1276010 00:05:29.544 23:05:25 -- common/autotest_common.sh@941 -- # uname 00:05:29.544 23:05:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:29.544 23:05:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1276010 00:05:29.544 23:05:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:29.544 23:05:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:29.544 23:05:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1276010' 00:05:29.544 killing process with pid 1276010 00:05:29.544 23:05:25 -- common/autotest_common.sh@955 -- # kill 1276010 00:05:29.544 23:05:25 -- common/autotest_common.sh@960 -- # wait 1276010 00:05:29.544 [2024-11-17 23:05:26.105644] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:29.804 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:29.804 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:29.804 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:29.804 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:29.804 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:29.804 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:29.804 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:29.804 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:29.804 00:05:29.804 real 0m4.487s 00:05:29.804 user 0m7.806s 00:05:29.804 sys 0m0.385s 00:05:29.804 23:05:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:29.804 23:05:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.804 ************************************ 00:05:29.804 END TEST event_scheduler 00:05:29.804 ************************************ 00:05:29.804 23:05:26 -- event/event.sh@51 -- # modprobe -n nbd 00:05:29.804 23:05:26 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:29.804 23:05:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:29.804 23:05:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:29.804 23:05:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.804 ************************************ 00:05:29.804 START TEST app_repeat 00:05:29.804 ************************************ 00:05:29.804 23:05:26 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:29.804 23:05:26 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.804 23:05:26 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.804 23:05:26 -- event/event.sh@13 -- # local nbd_list 00:05:29.804 23:05:26 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.804 23:05:26 -- 
event/event.sh@14 -- # local bdev_list 00:05:29.804 23:05:26 -- event/event.sh@15 -- # local repeat_times=4 00:05:29.804 23:05:26 -- event/event.sh@17 -- # modprobe nbd 00:05:29.804 23:05:26 -- event/event.sh@19 -- # repeat_pid=1276868 00:05:29.804 23:05:26 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.804 23:05:26 -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:29.804 23:05:26 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1276868' 00:05:29.804 Process app_repeat pid: 1276868 00:05:29.804 23:05:26 -- event/event.sh@23 -- # for i in {0..2} 00:05:29.804 23:05:26 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:29.804 spdk_app_start Round 0 00:05:29.804 23:05:26 -- event/event.sh@25 -- # waitforlisten 1276868 /var/tmp/spdk-nbd.sock 00:05:29.804 23:05:26 -- common/autotest_common.sh@829 -- # '[' -z 1276868 ']' 00:05:29.804 23:05:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:29.804 23:05:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.804 23:05:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:29.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:29.804 23:05:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.804 23:05:26 -- common/autotest_common.sh@10 -- # set +x 00:05:29.804 [2024-11-17 23:05:26.410816] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:29.804 [2024-11-17 23:05:26.410905] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276868 ] 00:05:30.065 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.065 [2024-11-17 23:05:26.481607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.065 [2024-11-17 23:05:26.548837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.065 [2024-11-17 23:05:26.548839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.634 23:05:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.634 23:05:27 -- common/autotest_common.sh@862 -- # return 0 00:05:30.634 23:05:27 -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.894 Malloc0 00:05:30.894 23:05:27 -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.154 Malloc1 00:05:31.154 23:05:27 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.154 23:05:27 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.154 23:05:27 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.154 23:05:27 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:31.154 23:05:27 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.154 23:05:27 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:31.154 23:05:27 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.154 
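[annotation] The Round 0 setup above creates two 64 MiB malloc bdevs with 4 KiB blocks and exports them over NBD through the app's RPC socket. Condensed, the sequence is the following; the commands are exactly as traced, only the $rpc shorthand is added for readability:

rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create 64 4096          # -> Malloc0
$rpc bdev_malloc_create 64 4096          # -> Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1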
23:05:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.154 23:05:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.154 23:05:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:31.154 23:05:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.154 23:05:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:31.154 23:05:27 -- bdev/nbd_common.sh@12 -- # local i 00:05:31.154 23:05:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:31.154 23:05:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.154 23:05:27 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:31.414 /dev/nbd0 00:05:31.414 23:05:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:31.414 23:05:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:31.414 23:05:27 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:31.414 23:05:27 -- common/autotest_common.sh@867 -- # local i 00:05:31.414 23:05:27 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:31.414 23:05:27 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:31.414 23:05:27 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:31.414 23:05:27 -- common/autotest_common.sh@871 -- # break 00:05:31.414 23:05:27 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:31.414 23:05:27 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:31.414 23:05:27 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.414 1+0 records in 00:05:31.414 1+0 records out 00:05:31.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264497 s, 15.5 MB/s 00:05:31.414 23:05:27 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:31.414 23:05:27 -- common/autotest_common.sh@884 -- # size=4096 00:05:31.414 23:05:27 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:31.414 23:05:27 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:31.414 23:05:27 -- common/autotest_common.sh@887 -- # return 0 00:05:31.414 23:05:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.414 23:05:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.414 23:05:27 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:31.414 /dev/nbd1 00:05:31.414 23:05:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:31.414 23:05:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:31.414 23:05:28 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:31.414 23:05:28 -- common/autotest_common.sh@867 -- # local i 00:05:31.414 23:05:28 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:31.414 23:05:28 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:31.414 23:05:28 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:31.414 23:05:28 -- common/autotest_common.sh@871 -- # break 00:05:31.414 23:05:28 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:31.414 23:05:28 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:31.414 23:05:28 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:05:31.414 1+0 records in 00:05:31.414 1+0 records out 00:05:31.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024801 s, 16.5 MB/s 00:05:31.414 23:05:28 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:31.674 23:05:28 -- common/autotest_common.sh@884 -- # size=4096 00:05:31.674 23:05:28 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:31.674 23:05:28 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:31.674 23:05:28 -- common/autotest_common.sh@887 -- # return 0 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.674 { 00:05:31.674 "nbd_device": "/dev/nbd0", 00:05:31.674 "bdev_name": "Malloc0" 00:05:31.674 }, 00:05:31.674 { 00:05:31.674 "nbd_device": "/dev/nbd1", 00:05:31.674 "bdev_name": "Malloc1" 00:05:31.674 } 00:05:31.674 ]' 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.674 { 00:05:31.674 "nbd_device": "/dev/nbd0", 00:05:31.674 "bdev_name": "Malloc0" 00:05:31.674 }, 00:05:31.674 { 00:05:31.674 "nbd_device": "/dev/nbd1", 00:05:31.674 "bdev_name": "Malloc1" 00:05:31.674 } 00:05:31.674 ]' 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.674 /dev/nbd1' 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.674 /dev/nbd1' 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.674 256+0 records in 00:05:31.674 256+0 records out 00:05:31.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0094353 s, 111 MB/s 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.674 23:05:28 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.934 256+0 records in 00:05:31.934 256+0 records out 00:05:31.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193358 s, 54.2 MB/s 00:05:31.934 23:05:28 -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.934 23:05:28 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.934 256+0 records in 00:05:31.934 256+0 records out 00:05:31.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210018 s, 49.9 MB/s 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@51 -- # local i 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.935 23:05:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.241 23:05:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.241 23:05:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.241 23:05:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.241 23:05:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.241 23:05:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.241 23:05:28 -- bdev/nbd_common.sh@41 -- # break 00:05:32.241 23:05:28 -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.241 23:05:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.241 23:05:28 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:32.241 23:05:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:32.241 23:05:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:32.241 23:05:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:32.241 23:05:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.241 23:05:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.242 23:05:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:32.242 23:05:28 -- bdev/nbd_common.sh@41 -- # break 00:05:32.242 23:05:28 -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:32.242 23:05:28 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.242 23:05:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.242 23:05:28 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.611 23:05:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.611 23:05:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.611 23:05:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.611 23:05:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.611 23:05:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.611 23:05:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.611 23:05:28 -- bdev/nbd_common.sh@65 -- # true 00:05:32.611 23:05:28 -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.611 23:05:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.611 23:05:28 -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.611 23:05:28 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.611 23:05:28 -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.611 23:05:28 -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.611 23:05:29 -- event/event.sh@35 -- # sleep 3 00:05:32.870 [2024-11-17 23:05:29.365554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.870 [2024-11-17 23:05:29.428176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.870 [2024-11-17 23:05:29.428177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.870 [2024-11-17 23:05:29.468924] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.870 [2024-11-17 23:05:29.468966] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:36.162 23:05:32 -- event/event.sh@23 -- # for i in {0..2} 00:05:36.162 23:05:32 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:36.162 spdk_app_start Round 1 00:05:36.162 23:05:32 -- event/event.sh@25 -- # waitforlisten 1276868 /var/tmp/spdk-nbd.sock 00:05:36.162 23:05:32 -- common/autotest_common.sh@829 -- # '[' -z 1276868 ']' 00:05:36.162 23:05:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.162 23:05:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.162 23:05:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
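[annotation] Each round ends as above: both disks are stopped, nbd_get_disks must come back empty (count=0), and spdk_kill_instance SIGTERM tears the app down before the next round starts. A hedged restatement of that emptiness check; grep -c prints 0 when nothing matches, and the trailing || true keeps a set -e shell from aborting on grep's nonzero exit:

# Should print 0 once nbd_stop_disk has run for both devices.
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks |
    jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true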
00:05:36.162 23:05:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.162 23:05:32 -- common/autotest_common.sh@10 -- # set +x 00:05:36.162 23:05:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.162 23:05:32 -- common/autotest_common.sh@862 -- # return 0 00:05:36.162 23:05:32 -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.162 Malloc0 00:05:36.162 23:05:32 -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.162 Malloc1 00:05:36.162 23:05:32 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.162 23:05:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.162 23:05:32 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.162 23:05:32 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.162 23:05:32 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.163 23:05:32 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.163 23:05:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.163 23:05:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.163 23:05:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.163 23:05:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.163 23:05:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.163 23:05:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.163 23:05:32 -- bdev/nbd_common.sh@12 -- # local i 00:05:36.163 23:05:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:36.163 23:05:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.163 23:05:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.422 /dev/nbd0 00:05:36.422 23:05:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.422 23:05:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.422 23:05:32 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:36.422 23:05:32 -- common/autotest_common.sh@867 -- # local i 00:05:36.422 23:05:32 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:36.422 23:05:32 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:36.422 23:05:32 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:36.422 23:05:32 -- common/autotest_common.sh@871 -- # break 00:05:36.422 23:05:32 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:36.422 23:05:32 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:36.422 23:05:32 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.422 1+0 records in 00:05:36.422 1+0 records out 00:05:36.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220159 s, 18.6 MB/s 00:05:36.422 23:05:32 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:36.422 23:05:32 -- common/autotest_common.sh@884 -- # size=4096 00:05:36.422 23:05:32 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:36.422 23:05:32 -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:36.422 23:05:32 -- common/autotest_common.sh@887 -- # return 0 00:05:36.422 23:05:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.422 23:05:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.422 23:05:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.682 /dev/nbd1 00:05:36.682 23:05:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.682 23:05:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.682 23:05:33 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:36.682 23:05:33 -- common/autotest_common.sh@867 -- # local i 00:05:36.682 23:05:33 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:36.682 23:05:33 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:36.682 23:05:33 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:36.682 23:05:33 -- common/autotest_common.sh@871 -- # break 00:05:36.682 23:05:33 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:36.682 23:05:33 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:36.682 23:05:33 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.682 1+0 records in 00:05:36.682 1+0 records out 00:05:36.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250821 s, 16.3 MB/s 00:05:36.682 23:05:33 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:36.682 23:05:33 -- common/autotest_common.sh@884 -- # size=4096 00:05:36.682 23:05:33 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:36.682 23:05:33 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:36.682 23:05:33 -- common/autotest_common.sh@887 -- # return 0 00:05:36.682 23:05:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.682 23:05:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.682 23:05:33 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.682 23:05:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.682 23:05:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.941 { 00:05:36.941 "nbd_device": "/dev/nbd0", 00:05:36.941 "bdev_name": "Malloc0" 00:05:36.941 }, 00:05:36.941 { 00:05:36.941 "nbd_device": "/dev/nbd1", 00:05:36.941 "bdev_name": "Malloc1" 00:05:36.941 } 00:05:36.941 ]' 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.941 { 00:05:36.941 "nbd_device": "/dev/nbd0", 00:05:36.941 "bdev_name": "Malloc0" 00:05:36.941 }, 00:05:36.941 { 00:05:36.941 "nbd_device": "/dev/nbd1", 00:05:36.941 "bdev_name": "Malloc1" 00:05:36.941 } 00:05:36.941 ]' 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.941 /dev/nbd1' 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.941 /dev/nbd1' 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.941 23:05:33 -- 
bdev/nbd_common.sh@95 -- # count=2 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.941 256+0 records in 00:05:36.941 256+0 records out 00:05:36.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107999 s, 97.1 MB/s 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.941 256+0 records in 00:05:36.941 256+0 records out 00:05:36.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200187 s, 52.4 MB/s 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.941 256+0 records in 00:05:36.941 256+0 records out 00:05:36.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206208 s, 50.9 MB/s 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@51 -- # local i 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.941 23:05:33 -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.200 23:05:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.200 23:05:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.200 23:05:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.200 23:05:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.200 23:05:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.200 23:05:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.200 23:05:33 -- bdev/nbd_common.sh@41 -- # break 00:05:37.200 23:05:33 -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.200 23:05:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.200 23:05:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.459 23:05:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.459 23:05:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.459 23:05:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.459 23:05:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.459 23:05:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.459 23:05:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.459 23:05:33 -- bdev/nbd_common.sh@41 -- # break 00:05:37.459 23:05:33 -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.459 23:05:33 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.459 23:05:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.459 23:05:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.460 23:05:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.460 23:05:34 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.460 23:05:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.719 23:05:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:37.719 23:05:34 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.719 23:05:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.719 23:05:34 -- bdev/nbd_common.sh@65 -- # true 00:05:37.719 23:05:34 -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.719 23:05:34 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.719 23:05:34 -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.719 23:05:34 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.719 23:05:34 -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.719 23:05:34 -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.719 23:05:34 -- event/event.sh@35 -- # sleep 3 00:05:37.979 [2024-11-17 23:05:34.483975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.979 [2024-11-17 23:05:34.546323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.979 [2024-11-17 23:05:34.546325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.979 [2024-11-17 23:05:34.586372] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.979 [2024-11-17 23:05:34.586416] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
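The round just completed is the suite's whole nbd data path in miniature: create Malloc bdevs over RPC, expose them as kernel /dev/nbd devices, write random data through the block layer, verify it byte-for-byte with cmp, then stop the disks and confirm nbd_get_disks reports nothing. Stripped of the framework plumbing, a minimal standalone sketch of the same round trip might look like the following (assuming a running SPDK target listening on /var/tmp/spdk-nbd.sock, the stock scripts/rpc.py from an SPDK checkout, a free /dev/nbd0, and root privileges for direct I/O; the temp-file path is illustrative, not the suite's):

# Sketch only: mirrors the trace above, not the suite's exact helpers.
RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$RPC bdev_malloc_create 64 4096               # 64 MiB bdev, 4 KiB blocks; prints "Malloc0"
$RPC nbd_start_disk Malloc0 /dev/nbd0         # expose the bdev as /dev/nbd0

dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256            # reference data
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct  # write through nbd
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0       # verify; non-zero exit on mismatch

$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd   # expect 0 devices

The waitfornbd polling visible in the trace (up to 20 passes of grepping /proc/partitions and test-reading one 4 KiB block with dd) is there because nbd_start_disk can return before the kernel has finished bringing the device up.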
00:05:41.272 23:05:37 -- event/event.sh@23 -- # for i in {0..2} 00:05:41.272 23:05:37 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:41.272 spdk_app_start Round 2 00:05:41.272 23:05:37 -- event/event.sh@25 -- # waitforlisten 1276868 /var/tmp/spdk-nbd.sock 00:05:41.272 23:05:37 -- common/autotest_common.sh@829 -- # '[' -z 1276868 ']' 00:05:41.272 23:05:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:41.272 23:05:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.272 23:05:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:41.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:41.272 23:05:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.272 23:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:41.272 23:05:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:41.272 23:05:37 -- common/autotest_common.sh@862 -- # return 0 00:05:41.272 23:05:37 -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.272 Malloc0 00:05:41.272 23:05:37 -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.272 Malloc1 00:05:41.272 23:05:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.272 23:05:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.272 23:05:37 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.272 23:05:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.272 23:05:37 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.272 23:05:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.272 23:05:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.272 23:05:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.272 23:05:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.272 23:05:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.272 23:05:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.272 23:05:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.272 23:05:37 -- bdev/nbd_common.sh@12 -- # local i 00:05:41.272 23:05:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.272 23:05:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.272 23:05:37 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.532 /dev/nbd0 00:05:41.532 23:05:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.532 23:05:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.532 23:05:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:41.532 23:05:38 -- common/autotest_common.sh@867 -- # local i 00:05:41.532 23:05:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:41.532 23:05:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:41.532 23:05:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:41.532 23:05:38 -- common/autotest_common.sh@871 -- # break 00:05:41.532 23:05:38 -- common/autotest_common.sh@882 -- # (( 
i = 1 )) 00:05:41.532 23:05:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:41.532 23:05:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.532 1+0 records in 00:05:41.532 1+0 records out 00:05:41.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000151126 s, 27.1 MB/s 00:05:41.532 23:05:38 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:41.532 23:05:38 -- common/autotest_common.sh@884 -- # size=4096 00:05:41.532 23:05:38 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:41.532 23:05:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:41.532 23:05:38 -- common/autotest_common.sh@887 -- # return 0 00:05:41.532 23:05:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.532 23:05:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.532 23:05:38 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.792 /dev/nbd1 00:05:41.792 23:05:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.792 23:05:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.792 23:05:38 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:41.792 23:05:38 -- common/autotest_common.sh@867 -- # local i 00:05:41.792 23:05:38 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:41.792 23:05:38 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:41.792 23:05:38 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:41.792 23:05:38 -- common/autotest_common.sh@871 -- # break 00:05:41.792 23:05:38 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:41.792 23:05:38 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:41.792 23:05:38 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.792 1+0 records in 00:05:41.792 1+0 records out 00:05:41.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256519 s, 16.0 MB/s 00:05:41.792 23:05:38 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:41.792 23:05:38 -- common/autotest_common.sh@884 -- # size=4096 00:05:41.792 23:05:38 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:05:41.792 23:05:38 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:41.792 23:05:38 -- common/autotest_common.sh@887 -- # return 0 00:05:41.792 23:05:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.792 23:05:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.792 23:05:38 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.792 23:05:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.792 23:05:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.052 23:05:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:42.052 { 00:05:42.052 "nbd_device": "/dev/nbd0", 00:05:42.052 "bdev_name": "Malloc0" 00:05:42.052 }, 00:05:42.052 { 00:05:42.052 "nbd_device": "/dev/nbd1", 00:05:42.052 "bdev_name": "Malloc1" 00:05:42.053 } 00:05:42.053 ]' 00:05:42.053 23:05:38 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.053 { 00:05:42.053 "nbd_device": "/dev/nbd0", 00:05:42.053 "bdev_name": "Malloc0" 00:05:42.053 }, 00:05:42.053 { 00:05:42.053 "nbd_device": "/dev/nbd1", 00:05:42.053 "bdev_name": "Malloc1" 00:05:42.053 } 00:05:42.053 ]' 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.053 /dev/nbd1' 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.053 /dev/nbd1' 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.053 256+0 records in 00:05:42.053 256+0 records out 00:05:42.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106323 s, 98.6 MB/s 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.053 256+0 records in 00:05:42.053 256+0 records out 00:05:42.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195585 s, 53.6 MB/s 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.053 256+0 records in 00:05:42.053 256+0 records out 00:05:42.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206744 s, 50.7 MB/s 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@51 -- # local i 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.053 23:05:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.313 23:05:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.313 23:05:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.313 23:05:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.313 23:05:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.313 23:05:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.313 23:05:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.313 23:05:38 -- bdev/nbd_common.sh@41 -- # break 00:05:42.313 23:05:38 -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.313 23:05:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.313 23:05:38 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.573 23:05:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.573 23:05:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.573 23:05:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.573 23:05:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.573 23:05:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.573 23:05:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.573 23:05:38 -- bdev/nbd_common.sh@41 -- # break 00:05:42.573 23:05:38 -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.573 23:05:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.573 23:05:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.573 23:05:38 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.573 23:05:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.573 23:05:39 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.573 23:05:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.832 23:05:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.832 23:05:39 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.832 23:05:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.832 23:05:39 -- bdev/nbd_common.sh@65 -- # true 00:05:42.832 23:05:39 -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.832 23:05:39 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.832 23:05:39 -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.832 23:05:39 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.832 23:05:39 -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.832 23:05:39 -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.832 23:05:39 -- 
event/event.sh@35 -- # sleep 3 00:05:43.091 [2024-11-17 23:05:39.589895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.091 [2024-11-17 23:05:39.652993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.091 [2024-11-17 23:05:39.652994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.091 [2024-11-17 23:05:39.693612] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:43.091 [2024-11-17 23:05:39.693657] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:46.385 23:05:42 -- event/event.sh@38 -- # waitforlisten 1276868 /var/tmp/spdk-nbd.sock 00:05:46.385 23:05:42 -- common/autotest_common.sh@829 -- # '[' -z 1276868 ']' 00:05:46.385 23:05:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.385 23:05:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.385 23:05:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:46.385 23:05:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.385 23:05:42 -- common/autotest_common.sh@10 -- # set +x 00:05:46.385 23:05:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.385 23:05:42 -- common/autotest_common.sh@862 -- # return 0 00:05:46.385 23:05:42 -- event/event.sh@39 -- # killprocess 1276868 00:05:46.385 23:05:42 -- common/autotest_common.sh@936 -- # '[' -z 1276868 ']' 00:05:46.385 23:05:42 -- common/autotest_common.sh@940 -- # kill -0 1276868 00:05:46.385 23:05:42 -- common/autotest_common.sh@941 -- # uname 00:05:46.385 23:05:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:46.385 23:05:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1276868 00:05:46.385 23:05:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:46.385 23:05:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:46.385 23:05:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1276868' 00:05:46.385 killing process with pid 1276868 00:05:46.385 23:05:42 -- common/autotest_common.sh@955 -- # kill 1276868 00:05:46.385 23:05:42 -- common/autotest_common.sh@960 -- # wait 1276868 00:05:46.385 spdk_app_start is called in Round 0. 00:05:46.385 Shutdown signal received, stop current app iteration 00:05:46.385 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:46.385 spdk_app_start is called in Round 1. 00:05:46.385 Shutdown signal received, stop current app iteration 00:05:46.385 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:46.385 spdk_app_start is called in Round 2. 00:05:46.385 Shutdown signal received, stop current app iteration 00:05:46.385 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:46.385 spdk_app_start is called in Round 3. 
00:05:46.385 Shutdown signal received, stop current app iteration 00:05:46.385 23:05:42 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:46.385 23:05:42 -- event/event.sh@42 -- # return 0 00:05:46.385 00:05:46.385 real 0m16.440s 00:05:46.385 user 0m35.049s 00:05:46.385 sys 0m3.064s 00:05:46.385 23:05:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.385 23:05:42 -- common/autotest_common.sh@10 -- # set +x 00:05:46.385 ************************************ 00:05:46.385 END TEST app_repeat 00:05:46.385 ************************************ 00:05:46.385 23:05:42 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:46.385 23:05:42 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.385 23:05:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.385 23:05:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.385 23:05:42 -- common/autotest_common.sh@10 -- # set +x 00:05:46.385 ************************************ 00:05:46.385 START TEST cpu_locks 00:05:46.385 ************************************ 00:05:46.385 23:05:42 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:46.385 * Looking for test storage... 00:05:46.385 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:05:46.385 23:05:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:46.385 23:05:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:46.385 23:05:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:46.645 23:05:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:46.645 23:05:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:46.645 23:05:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:46.645 23:05:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:46.645 23:05:43 -- scripts/common.sh@335 -- # IFS=.-: 00:05:46.645 23:05:43 -- scripts/common.sh@335 -- # read -ra ver1 00:05:46.645 23:05:43 -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.645 23:05:43 -- scripts/common.sh@336 -- # read -ra ver2 00:05:46.645 23:05:43 -- scripts/common.sh@337 -- # local 'op=<' 00:05:46.645 23:05:43 -- scripts/common.sh@339 -- # ver1_l=2 00:05:46.645 23:05:43 -- scripts/common.sh@340 -- # ver2_l=1 00:05:46.645 23:05:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:46.645 23:05:43 -- scripts/common.sh@343 -- # case "$op" in 00:05:46.645 23:05:43 -- scripts/common.sh@344 -- # : 1 00:05:46.645 23:05:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:46.645 23:05:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.645 23:05:43 -- scripts/common.sh@364 -- # decimal 1 00:05:46.645 23:05:43 -- scripts/common.sh@352 -- # local d=1 00:05:46.645 23:05:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.645 23:05:43 -- scripts/common.sh@354 -- # echo 1 00:05:46.645 23:05:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:46.645 23:05:43 -- scripts/common.sh@365 -- # decimal 2 00:05:46.645 23:05:43 -- scripts/common.sh@352 -- # local d=2 00:05:46.645 23:05:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.645 23:05:43 -- scripts/common.sh@354 -- # echo 2 00:05:46.645 23:05:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:46.645 23:05:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:46.645 23:05:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:46.645 23:05:43 -- scripts/common.sh@367 -- # return 0 00:05:46.645 23:05:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.645 23:05:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:46.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.645 --rc genhtml_branch_coverage=1 00:05:46.645 --rc genhtml_function_coverage=1 00:05:46.645 --rc genhtml_legend=1 00:05:46.645 --rc geninfo_all_blocks=1 00:05:46.645 --rc geninfo_unexecuted_blocks=1 00:05:46.645 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:46.645 ' 00:05:46.645 23:05:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:46.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.645 --rc genhtml_branch_coverage=1 00:05:46.645 --rc genhtml_function_coverage=1 00:05:46.645 --rc genhtml_legend=1 00:05:46.645 --rc geninfo_all_blocks=1 00:05:46.645 --rc geninfo_unexecuted_blocks=1 00:05:46.645 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:46.645 ' 00:05:46.645 23:05:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:46.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.645 --rc genhtml_branch_coverage=1 00:05:46.645 --rc genhtml_function_coverage=1 00:05:46.645 --rc genhtml_legend=1 00:05:46.645 --rc geninfo_all_blocks=1 00:05:46.645 --rc geninfo_unexecuted_blocks=1 00:05:46.645 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:46.645 ' 00:05:46.645 23:05:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:46.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.645 --rc genhtml_branch_coverage=1 00:05:46.645 --rc genhtml_function_coverage=1 00:05:46.645 --rc genhtml_legend=1 00:05:46.645 --rc geninfo_all_blocks=1 00:05:46.645 --rc geninfo_unexecuted_blocks=1 00:05:46.645 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:46.645 ' 00:05:46.645 23:05:43 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:46.645 23:05:43 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:46.645 23:05:43 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:46.645 23:05:43 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:46.645 23:05:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.645 23:05:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.645 23:05:43 -- common/autotest_common.sh@10 -- # set +x 00:05:46.645 ************************************ 00:05:46.645 START TEST default_locks 
00:05:46.645 ************************************ 00:05:46.645 23:05:43 -- common/autotest_common.sh@1114 -- # default_locks 00:05:46.645 23:05:43 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1280076 00:05:46.645 23:05:43 -- event/cpu_locks.sh@47 -- # waitforlisten 1280076 00:05:46.645 23:05:43 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.645 23:05:43 -- common/autotest_common.sh@829 -- # '[' -z 1280076 ']' 00:05:46.645 23:05:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.645 23:05:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.645 23:05:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.645 23:05:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.645 23:05:43 -- common/autotest_common.sh@10 -- # set +x 00:05:46.645 [2024-11-17 23:05:43.095611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:46.645 [2024-11-17 23:05:43.095701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280076 ] 00:05:46.645 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.645 [2024-11-17 23:05:43.163837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.645 [2024-11-17 23:05:43.236712] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.645 [2024-11-17 23:05:43.236821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.584 23:05:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.584 23:05:43 -- common/autotest_common.sh@862 -- # return 0 00:05:47.584 23:05:43 -- event/cpu_locks.sh@49 -- # locks_exist 1280076 00:05:47.584 23:05:43 -- event/cpu_locks.sh@22 -- # lslocks -p 1280076 00:05:47.584 23:05:43 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.153 lslocks: write error 00:05:48.153 23:05:44 -- event/cpu_locks.sh@50 -- # killprocess 1280076 00:05:48.153 23:05:44 -- common/autotest_common.sh@936 -- # '[' -z 1280076 ']' 00:05:48.153 23:05:44 -- common/autotest_common.sh@940 -- # kill -0 1280076 00:05:48.153 23:05:44 -- common/autotest_common.sh@941 -- # uname 00:05:48.153 23:05:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.153 23:05:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1280076 00:05:48.153 23:05:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:48.153 23:05:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:48.153 23:05:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1280076' 00:05:48.153 killing process with pid 1280076 00:05:48.153 23:05:44 -- common/autotest_common.sh@955 -- # kill 1280076 00:05:48.153 23:05:44 -- common/autotest_common.sh@960 -- # wait 1280076 00:05:48.413 23:05:44 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1280076 00:05:48.413 23:05:44 -- common/autotest_common.sh@650 -- # local es=0 00:05:48.413 23:05:44 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1280076 00:05:48.413 23:05:44 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:48.413 23:05:44 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.413 23:05:44 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:48.413 23:05:44 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.413 23:05:44 -- common/autotest_common.sh@653 -- # waitforlisten 1280076 00:05:48.413 23:05:44 -- common/autotest_common.sh@829 -- # '[' -z 1280076 ']' 00:05:48.413 23:05:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.413 23:05:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.413 23:05:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.413 23:05:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.413 23:05:44 -- common/autotest_common.sh@10 -- # set +x 00:05:48.413 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1280076) - No such process 00:05:48.413 ERROR: process (pid: 1280076) is no longer running 00:05:48.413 23:05:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.413 23:05:44 -- common/autotest_common.sh@862 -- # return 1 00:05:48.413 23:05:44 -- common/autotest_common.sh@653 -- # es=1 00:05:48.413 23:05:44 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:48.413 23:05:44 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:48.413 23:05:44 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:48.413 23:05:44 -- event/cpu_locks.sh@54 -- # no_locks 00:05:48.413 23:05:44 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.413 23:05:44 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.413 23:05:44 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.413 00:05:48.413 real 0m1.777s 00:05:48.413 user 0m1.878s 00:05:48.413 sys 0m0.634s 00:05:48.413 23:05:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:48.413 23:05:44 -- common/autotest_common.sh@10 -- # set +x 00:05:48.413 ************************************ 00:05:48.413 END TEST default_locks 00:05:48.413 ************************************ 00:05:48.413 23:05:44 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:48.413 23:05:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.413 23:05:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.413 23:05:44 -- common/autotest_common.sh@10 -- # set +x 00:05:48.413 ************************************ 00:05:48.413 START TEST default_locks_via_rpc 00:05:48.413 ************************************ 00:05:48.413 23:05:44 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:48.413 23:05:44 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1280378 00:05:48.413 23:05:44 -- event/cpu_locks.sh@63 -- # waitforlisten 1280378 00:05:48.413 23:05:44 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.413 23:05:44 -- common/autotest_common.sh@829 -- # '[' -z 1280378 ']' 00:05:48.413 23:05:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.413 23:05:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.413 23:05:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:48.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.413 23:05:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.413 23:05:44 -- common/autotest_common.sh@10 -- # set +x 00:05:48.413 [2024-11-17 23:05:44.919944] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:48.413 [2024-11-17 23:05:44.920013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280378 ] 00:05:48.413 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.413 [2024-11-17 23:05:44.988011] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.673 [2024-11-17 23:05:45.050797] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:48.673 [2024-11-17 23:05:45.050905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.243 23:05:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.243 23:05:45 -- common/autotest_common.sh@862 -- # return 0 00:05:49.243 23:05:45 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:49.243 23:05:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.243 23:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:49.243 23:05:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.243 23:05:45 -- event/cpu_locks.sh@67 -- # no_locks 00:05:49.243 23:05:45 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:49.243 23:05:45 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:49.243 23:05:45 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:49.243 23:05:45 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:49.243 23:05:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.243 23:05:45 -- common/autotest_common.sh@10 -- # set +x 00:05:49.243 23:05:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.243 23:05:45 -- event/cpu_locks.sh@71 -- # locks_exist 1280378 00:05:49.243 23:05:45 -- event/cpu_locks.sh@22 -- # lslocks -p 1280378 00:05:49.243 23:05:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.814 23:05:46 -- event/cpu_locks.sh@73 -- # killprocess 1280378 00:05:49.814 23:05:46 -- common/autotest_common.sh@936 -- # '[' -z 1280378 ']' 00:05:49.814 23:05:46 -- common/autotest_common.sh@940 -- # kill -0 1280378 00:05:49.814 23:05:46 -- common/autotest_common.sh@941 -- # uname 00:05:49.814 23:05:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:49.814 23:05:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1280378 00:05:49.814 23:05:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:49.814 23:05:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:49.814 23:05:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1280378' 00:05:49.814 killing process with pid 1280378 00:05:49.814 23:05:46 -- common/autotest_common.sh@955 -- # kill 1280378 00:05:49.814 23:05:46 -- common/autotest_common.sh@960 -- # wait 1280378 00:05:50.074 00:05:50.074 real 0m1.589s 00:05:50.074 user 0m1.685s 00:05:50.074 sys 0m0.518s 00:05:50.074 23:05:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:50.074 23:05:46 -- common/autotest_common.sh@10 -- # set +x 00:05:50.074 ************************************ 00:05:50.074 END TEST default_locks_via_rpc 00:05:50.074 
************************************ 00:05:50.074 23:05:46 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:50.074 23:05:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.074 23:05:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.074 23:05:46 -- common/autotest_common.sh@10 -- # set +x 00:05:50.074 ************************************ 00:05:50.074 START TEST non_locking_app_on_locked_coremask 00:05:50.074 ************************************ 00:05:50.074 23:05:46 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:50.074 23:05:46 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1280677 00:05:50.074 23:05:46 -- event/cpu_locks.sh@81 -- # waitforlisten 1280677 /var/tmp/spdk.sock 00:05:50.074 23:05:46 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.074 23:05:46 -- common/autotest_common.sh@829 -- # '[' -z 1280677 ']' 00:05:50.074 23:05:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.074 23:05:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.074 23:05:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.074 23:05:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.074 23:05:46 -- common/autotest_common.sh@10 -- # set +x 00:05:50.074 [2024-11-17 23:05:46.559758] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:50.074 [2024-11-17 23:05:46.559825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280677 ] 00:05:50.074 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.074 [2024-11-17 23:05:46.627189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.334 [2024-11-17 23:05:46.701375] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.334 [2024-11-17 23:05:46.701477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.903 23:05:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.903 23:05:47 -- common/autotest_common.sh@862 -- # return 0 00:05:50.903 23:05:47 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:50.903 23:05:47 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1280940 00:05:50.903 23:05:47 -- event/cpu_locks.sh@85 -- # waitforlisten 1280940 /var/tmp/spdk2.sock 00:05:50.903 23:05:47 -- common/autotest_common.sh@829 -- # '[' -z 1280940 ']' 00:05:50.903 23:05:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.903 23:05:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.903 23:05:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:50.903 23:05:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.903 23:05:47 -- common/autotest_common.sh@10 -- # set +x 00:05:50.903 [2024-11-17 23:05:47.400128] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:50.903 [2024-11-17 23:05:47.400173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280940 ] 00:05:50.903 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.903 [2024-11-17 23:05:47.485839] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:50.903 [2024-11-17 23:05:47.485861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.163 [2024-11-17 23:05:47.622383] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.163 [2024-11-17 23:05:47.622485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.732 23:05:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.732 23:05:48 -- common/autotest_common.sh@862 -- # return 0 00:05:51.732 23:05:48 -- event/cpu_locks.sh@87 -- # locks_exist 1280677 00:05:51.732 23:05:48 -- event/cpu_locks.sh@22 -- # lslocks -p 1280677 00:05:51.732 23:05:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.112 lslocks: write error 00:05:53.112 23:05:49 -- event/cpu_locks.sh@89 -- # killprocess 1280677 00:05:53.112 23:05:49 -- common/autotest_common.sh@936 -- # '[' -z 1280677 ']' 00:05:53.112 23:05:49 -- common/autotest_common.sh@940 -- # kill -0 1280677 00:05:53.112 23:05:49 -- common/autotest_common.sh@941 -- # uname 00:05:53.112 23:05:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.112 23:05:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1280677 00:05:53.112 23:05:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.112 23:05:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.112 23:05:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1280677' 00:05:53.112 killing process with pid 1280677 00:05:53.112 23:05:49 -- common/autotest_common.sh@955 -- # kill 1280677 00:05:53.112 23:05:49 -- common/autotest_common.sh@960 -- # wait 1280677 00:05:53.681 23:05:50 -- event/cpu_locks.sh@90 -- # killprocess 1280940 00:05:53.681 23:05:50 -- common/autotest_common.sh@936 -- # '[' -z 1280940 ']' 00:05:53.681 23:05:50 -- common/autotest_common.sh@940 -- # kill -0 1280940 00:05:53.681 23:05:50 -- common/autotest_common.sh@941 -- # uname 00:05:53.681 23:05:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:53.681 23:05:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1280940 00:05:53.681 23:05:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:53.681 23:05:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:53.681 23:05:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1280940' 00:05:53.681 killing process with pid 1280940 00:05:53.681 23:05:50 -- common/autotest_common.sh@955 -- # kill 1280940 00:05:53.681 23:05:50 -- common/autotest_common.sh@960 -- # wait 1280940 00:05:54.250 00:05:54.250 real 0m4.021s 00:05:54.250 user 0m4.328s 00:05:54.250 sys 0m1.300s 00:05:54.250 23:05:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:54.250 23:05:50 -- common/autotest_common.sh@10 -- # set +x 00:05:54.250 
************************************ 00:05:54.250 END TEST non_locking_app_on_locked_coremask 00:05:54.250 ************************************ 00:05:54.250 23:05:50 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:54.250 23:05:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.250 23:05:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.250 23:05:50 -- common/autotest_common.sh@10 -- # set +x 00:05:54.250 ************************************ 00:05:54.250 START TEST locking_app_on_unlocked_coremask 00:05:54.250 ************************************ 00:05:54.250 23:05:50 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:54.250 23:05:50 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1281517 00:05:54.250 23:05:50 -- event/cpu_locks.sh@99 -- # waitforlisten 1281517 /var/tmp/spdk.sock 00:05:54.250 23:05:50 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:54.250 23:05:50 -- common/autotest_common.sh@829 -- # '[' -z 1281517 ']' 00:05:54.250 23:05:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.250 23:05:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.250 23:05:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.250 23:05:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.250 23:05:50 -- common/autotest_common.sh@10 -- # set +x 00:05:54.250 [2024-11-17 23:05:50.629771] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:54.250 [2024-11-17 23:05:50.629857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281517 ] 00:05:54.250 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.250 [2024-11-17 23:05:50.697447] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:54.250 [2024-11-17 23:05:50.697472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.250 [2024-11-17 23:05:50.771772] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:54.250 [2024-11-17 23:05:50.771876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.189 23:05:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.189 23:05:51 -- common/autotest_common.sh@862 -- # return 0 00:05:55.189 23:05:51 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1281621 00:05:55.189 23:05:51 -- event/cpu_locks.sh@103 -- # waitforlisten 1281621 /var/tmp/spdk2.sock 00:05:55.189 23:05:51 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.189 23:05:51 -- common/autotest_common.sh@829 -- # '[' -z 1281621 ']' 00:05:55.189 23:05:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.189 23:05:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.189 23:05:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:55.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.189 23:05:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.189 23:05:51 -- common/autotest_common.sh@10 -- # set +x 00:05:55.189 [2024-11-17 23:05:51.483371] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:55.189 [2024-11-17 23:05:51.483441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281621 ] 00:05:55.189 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.189 [2024-11-17 23:05:51.576170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.189 [2024-11-17 23:05:51.720822] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.189 [2024-11-17 23:05:51.720922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.758 23:05:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.758 23:05:52 -- common/autotest_common.sh@862 -- # return 0 00:05:55.758 23:05:52 -- event/cpu_locks.sh@105 -- # locks_exist 1281621 00:05:55.758 23:05:52 -- event/cpu_locks.sh@22 -- # lslocks -p 1281621 00:05:55.758 23:05:52 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.697 lslocks: write error 00:05:56.697 23:05:53 -- event/cpu_locks.sh@107 -- # killprocess 1281517 00:05:56.697 23:05:53 -- common/autotest_common.sh@936 -- # '[' -z 1281517 ']' 00:05:56.697 23:05:53 -- common/autotest_common.sh@940 -- # kill -0 1281517 00:05:56.697 23:05:53 -- common/autotest_common.sh@941 -- # uname 00:05:56.697 23:05:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.697 23:05:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1281517 00:05:56.697 23:05:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.697 23:05:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.697 23:05:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1281517' 00:05:56.697 killing process with pid 1281517 00:05:56.697 23:05:53 -- common/autotest_common.sh@955 -- # kill 1281517 00:05:56.697 23:05:53 -- common/autotest_common.sh@960 -- # wait 1281517 00:05:57.266 23:05:53 -- event/cpu_locks.sh@108 -- # killprocess 1281621 00:05:57.266 23:05:53 -- common/autotest_common.sh@936 -- # '[' -z 1281621 ']' 00:05:57.266 23:05:53 -- common/autotest_common.sh@940 -- # kill -0 1281621 00:05:57.266 23:05:53 -- common/autotest_common.sh@941 -- # uname 00:05:57.266 23:05:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.266 23:05:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1281621 00:05:57.266 23:05:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:57.266 23:05:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:57.266 23:05:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1281621' 00:05:57.266 killing process with pid 1281621 00:05:57.266 23:05:53 -- common/autotest_common.sh@955 -- # kill 1281621 00:05:57.266 23:05:53 -- common/autotest_common.sh@960 -- # wait 1281621 00:05:57.525 00:05:57.525 real 0m3.482s 00:05:57.525 user 0m3.761s 00:05:57.525 sys 0m1.135s 00:05:57.525 23:05:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.526 23:05:54 -- common/autotest_common.sh@10 -- # set +x 00:05:57.526 
************************************ 00:05:57.526 END TEST locking_app_on_unlocked_coremask 00:05:57.526 ************************************ 00:05:57.526 23:05:54 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:57.526 23:05:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.526 23:05:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.526 23:05:54 -- common/autotest_common.sh@10 -- # set +x 00:05:57.526 ************************************ 00:05:57.526 START TEST locking_app_on_locked_coremask 00:05:57.526 ************************************ 00:05:57.526 23:05:54 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:57.526 23:05:54 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1282102 00:05:57.786 23:05:54 -- event/cpu_locks.sh@116 -- # waitforlisten 1282102 /var/tmp/spdk.sock 00:05:57.786 23:05:54 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.786 23:05:54 -- common/autotest_common.sh@829 -- # '[' -z 1282102 ']' 00:05:57.786 23:05:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.786 23:05:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.786 23:05:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.786 23:05:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.786 23:05:54 -- common/autotest_common.sh@10 -- # set +x 00:05:57.786 [2024-11-17 23:05:54.163622] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:57.786 [2024-11-17 23:05:54.163730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282102 ] 00:05:57.786 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.786 [2024-11-17 23:05:54.230541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.786 [2024-11-17 23:05:54.303965] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.786 [2024-11-17 23:05:54.304073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.724 23:05:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.724 23:05:54 -- common/autotest_common.sh@862 -- # return 0 00:05:58.724 23:05:54 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1282370 00:05:58.724 23:05:54 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1282370 /var/tmp/spdk2.sock 00:05:58.724 23:05:54 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:58.724 23:05:54 -- common/autotest_common.sh@650 -- # local es=0 00:05:58.724 23:05:54 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1282370 /var/tmp/spdk2.sock 00:05:58.724 23:05:54 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:58.724 23:05:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.724 23:05:54 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:58.724 23:05:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:58.724 23:05:54 -- common/autotest_common.sh@653 -- # waitforlisten 1282370 /var/tmp/spdk2.sock 00:05:58.724 23:05:54 -- common/autotest_common.sh@829 -- # '[' -z 1282370 ']' 00:05:58.724 23:05:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.724 23:05:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.725 23:05:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.725 23:05:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.725 23:05:54 -- common/autotest_common.sh@10 -- # set +x 00:05:58.725 [2024-11-17 23:05:55.016264] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:58.725 [2024-11-17 23:05:55.016326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282370 ] 00:05:58.725 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.725 [2024-11-17 23:05:55.106555] app.c: 666:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1282102 has claimed it. 00:05:58.725 [2024-11-17 23:05:55.106597] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
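This failure is the point of the test: the first target (pid 1282102, started with -m 0x1) holds a POSIX lock on the spdk_cpu_lock file for core 0, so a second target requesting the same core mask refuses to start even though it was given its own RPC socket. A hedged reproduction outside the suite, assuming only a built SPDK tree (the binary path, socket path, sleep and timeout values are illustrative; the suite waits on the RPC socket via waitforlisten instead of sleeping):

# First instance claims core 0 and keeps running in the background.
./build/bin/spdk_tgt -m 0x1 &
first=$!
sleep 2                                   # crude; waitforlisten is the robust version

# Second instance on the same mask but a different RPC socket must fail
# with "Cannot create lock on core 0 ..." and exit non-zero.
if timeout 10 ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "unexpected: second instance acquired the core locks" >&2
else
    echo "expected failure: core 0 already claimed"
fi

kill $first && wait $first

Passing --disable-cpumask-locks, as the non_locking_app_on_locked_coremask and locking_app_on_unlocked_coremask runs above do, is what lets two instances share a core mask.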
00:05:59.293 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1282370) - No such process 00:05:59.293 ERROR: process (pid: 1282370) is no longer running 00:05:59.293 23:05:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.293 23:05:55 -- common/autotest_common.sh@862 -- # return 1 00:05:59.293 23:05:55 -- common/autotest_common.sh@653 -- # es=1 00:05:59.293 23:05:55 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.293 23:05:55 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:59.294 23:05:55 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.294 23:05:55 -- event/cpu_locks.sh@122 -- # locks_exist 1282102 00:05:59.294 23:05:55 -- event/cpu_locks.sh@22 -- # lslocks -p 1282102 00:05:59.294 23:05:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.863 lslocks: write error 00:05:59.863 23:05:56 -- event/cpu_locks.sh@124 -- # killprocess 1282102 00:05:59.863 23:05:56 -- common/autotest_common.sh@936 -- # '[' -z 1282102 ']' 00:05:59.863 23:05:56 -- common/autotest_common.sh@940 -- # kill -0 1282102 00:05:59.863 23:05:56 -- common/autotest_common.sh@941 -- # uname 00:05:59.863 23:05:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.863 23:05:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1282102 00:05:59.863 23:05:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.863 23:05:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.863 23:05:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1282102' 00:05:59.863 killing process with pid 1282102 00:05:59.863 23:05:56 -- common/autotest_common.sh@955 -- # kill 1282102 00:05:59.863 23:05:56 -- common/autotest_common.sh@960 -- # wait 1282102 00:06:00.123 00:06:00.123 real 0m2.427s 00:06:00.123 user 0m2.666s 00:06:00.123 sys 0m0.736s 00:06:00.123 23:05:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:00.123 23:05:56 -- common/autotest_common.sh@10 -- # set +x 00:06:00.123 ************************************ 00:06:00.123 END TEST locking_app_on_locked_coremask 00:06:00.123 ************************************ 00:06:00.123 23:05:56 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:00.123 23:05:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.123 23:05:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.123 23:05:56 -- common/autotest_common.sh@10 -- # set +x 00:06:00.123 ************************************ 00:06:00.123 START TEST locking_overlapped_coremask 00:06:00.123 ************************************ 00:06:00.123 23:05:56 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:06:00.123 23:05:56 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1282666 00:06:00.123 23:05:56 -- event/cpu_locks.sh@133 -- # waitforlisten 1282666 /var/tmp/spdk.sock 00:06:00.123 23:05:56 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:00.123 23:05:56 -- common/autotest_common.sh@829 -- # '[' -z 1282666 ']' 00:06:00.123 23:05:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.123 23:05:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.123 23:05:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:00.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.123 23:05:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.123 23:05:56 -- common/autotest_common.sh@10 -- # set +x 00:06:00.123 [2024-11-17 23:05:56.636683] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:00.123 [2024-11-17 23:05:56.636751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282666 ] 00:06:00.123 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.123 [2024-11-17 23:05:56.705282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.383 [2024-11-17 23:05:56.780129] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:00.383 [2024-11-17 23:05:56.780262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.383 [2024-11-17 23:05:56.780359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.383 [2024-11-17 23:05:56.780361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.951 23:05:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.951 23:05:57 -- common/autotest_common.sh@862 -- # return 0 00:06:00.951 23:05:57 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1282737 00:06:00.951 23:05:57 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1282737 /var/tmp/spdk2.sock 00:06:00.951 23:05:57 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:00.951 23:05:57 -- common/autotest_common.sh@650 -- # local es=0 00:06:00.951 23:05:57 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1282737 /var/tmp/spdk2.sock 00:06:00.951 23:05:57 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:00.951 23:05:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.951 23:05:57 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:00.951 23:05:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:00.951 23:05:57 -- common/autotest_common.sh@653 -- # waitforlisten 1282737 /var/tmp/spdk2.sock 00:06:00.951 23:05:57 -- common/autotest_common.sh@829 -- # '[' -z 1282737 ']' 00:06:00.951 23:05:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.951 23:05:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.951 23:05:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.951 23:05:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.951 23:05:57 -- common/autotest_common.sh@10 -- # set +x 00:06:00.951 [2024-11-17 23:05:57.508506] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:00.951 [2024-11-17 23:05:57.508577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282737 ] 00:06:00.951 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.243 [2024-11-17 23:05:57.603672] app.c: 666:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1282666 has claimed it. 00:06:01.243 [2024-11-17 23:05:57.603712] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:01.813 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1282737) - No such process 00:06:01.813 ERROR: process (pid: 1282737) is no longer running 00:06:01.813 23:05:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.813 23:05:58 -- common/autotest_common.sh@862 -- # return 1 00:06:01.813 23:05:58 -- common/autotest_common.sh@653 -- # es=1 00:06:01.813 23:05:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.813 23:05:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:01.813 23:05:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.813 23:05:58 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:01.813 23:05:58 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.813 23:05:58 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.813 23:05:58 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.813 23:05:58 -- event/cpu_locks.sh@141 -- # killprocess 1282666 00:06:01.813 23:05:58 -- common/autotest_common.sh@936 -- # '[' -z 1282666 ']' 00:06:01.813 23:05:58 -- common/autotest_common.sh@940 -- # kill -0 1282666 00:06:01.813 23:05:58 -- common/autotest_common.sh@941 -- # uname 00:06:01.813 23:05:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:01.813 23:05:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1282666 00:06:01.813 23:05:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:01.813 23:05:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:01.813 23:05:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1282666' 00:06:01.813 killing process with pid 1282666 00:06:01.813 23:05:58 -- common/autotest_common.sh@955 -- # kill 1282666 00:06:01.813 23:05:58 -- common/autotest_common.sh@960 -- # wait 1282666 00:06:02.073 00:06:02.073 real 0m1.932s 00:06:02.073 user 0m5.497s 00:06:02.073 sys 0m0.455s 00:06:02.073 23:05:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.073 23:05:58 -- common/autotest_common.sh@10 -- # set +x 00:06:02.073 ************************************ 00:06:02.073 END TEST locking_overlapped_coremask 00:06:02.073 ************************************ 00:06:02.073 23:05:58 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:02.073 23:05:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.073 23:05:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.073 23:05:58 -- common/autotest_common.sh@10 -- # set +x 00:06:02.073 ************************************ 00:06:02.073 
START TEST locking_overlapped_coremask_via_rpc 00:06:02.073 ************************************ 00:06:02.073 23:05:58 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:06:02.073 23:05:58 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1282984 00:06:02.073 23:05:58 -- event/cpu_locks.sh@149 -- # waitforlisten 1282984 /var/tmp/spdk.sock 00:06:02.073 23:05:58 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:02.073 23:05:58 -- common/autotest_common.sh@829 -- # '[' -z 1282984 ']' 00:06:02.073 23:05:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.073 23:05:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.073 23:05:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.073 23:05:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.073 23:05:58 -- common/autotest_common.sh@10 -- # set +x 00:06:02.073 [2024-11-17 23:05:58.620860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:02.073 [2024-11-17 23:05:58.620929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282984 ] 00:06:02.073 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.333 [2024-11-17 23:05:58.687096] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:02.333 [2024-11-17 23:05:58.687131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.333 [2024-11-17 23:05:58.750799] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:02.333 [2024-11-17 23:05:58.750936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.333 [2024-11-17 23:05:58.751033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.333 [2024-11-17 23:05:58.751035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.901 23:05:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.901 23:05:59 -- common/autotest_common.sh@862 -- # return 0 00:06:02.901 23:05:59 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1283254 00:06:02.901 23:05:59 -- event/cpu_locks.sh@153 -- # waitforlisten 1283254 /var/tmp/spdk2.sock 00:06:02.901 23:05:59 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:02.901 23:05:59 -- common/autotest_common.sh@829 -- # '[' -z 1283254 ']' 00:06:02.901 23:05:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.901 23:05:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.901 23:05:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
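[Editor's note] Unlike the previous tests, both targets in this test start with --disable-cpumask-locks, so neither claims /var/tmp/spdk_cpu_lock_* at boot (hence the "CPU core locks deactivated" notices) and the overlapping masks 0x7 (cores 0-2) and 0x1c (cores 2-4) can run side by side; the conflict is only provoked later over RPC. Sketch of the same setup, assuming the build path used in this run:

  SPDK_BIN=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin
  "$SPDK_BIN"/spdk_tgt -m 0x7 --disable-cpumask-locks &
  "$SPDK_BIN"/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  # both come up; core 2 is shared but stays unlocked until framework_enable_cpumask_locks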
00:06:02.901 23:05:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.901 23:05:59 -- common/autotest_common.sh@10 -- # set +x 00:06:02.901 [2024-11-17 23:05:59.473758] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:02.901 [2024-11-17 23:05:59.473820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1283254 ] 00:06:02.901 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.160 [2024-11-17 23:05:59.566226] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:03.160 [2024-11-17 23:05:59.566260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.160 [2024-11-17 23:05:59.704357] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:03.160 [2024-11-17 23:05:59.704499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.160 [2024-11-17 23:05:59.707579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.160 [2024-11-17 23:05:59.707580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:03.729 23:06:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.729 23:06:00 -- common/autotest_common.sh@862 -- # return 0 00:06:03.729 23:06:00 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:03.729 23:06:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.729 23:06:00 -- common/autotest_common.sh@10 -- # set +x 00:06:03.729 23:06:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.729 23:06:00 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.729 23:06:00 -- common/autotest_common.sh@650 -- # local es=0 00:06:03.729 23:06:00 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.729 23:06:00 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:03.729 23:06:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.729 23:06:00 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:03.729 23:06:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.729 23:06:00 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:03.729 23:06:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.729 23:06:00 -- common/autotest_common.sh@10 -- # set +x 00:06:03.729 [2024-11-17 23:06:00.328597] app.c: 666:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1282984 has claimed it. 
00:06:03.729 request: 00:06:03.729 { 00:06:03.729 "method": "framework_enable_cpumask_locks", 00:06:03.729 "req_id": 1 00:06:03.729 } 00:06:03.729 Got JSON-RPC error response 00:06:03.729 response: 00:06:03.729 { 00:06:03.729 "code": -32603, 00:06:03.729 "message": "Failed to claim CPU core: 2" 00:06:03.729 } 00:06:03.729 23:06:00 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:03.729 23:06:00 -- common/autotest_common.sh@653 -- # es=1 00:06:03.729 23:06:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.729 23:06:00 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.729 23:06:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.729 23:06:00 -- event/cpu_locks.sh@158 -- # waitforlisten 1282984 /var/tmp/spdk.sock 00:06:03.729 23:06:00 -- common/autotest_common.sh@829 -- # '[' -z 1282984 ']' 00:06:03.729 23:06:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.988 23:06:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.988 23:06:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.988 23:06:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.988 23:06:00 -- common/autotest_common.sh@10 -- # set +x 00:06:03.988 23:06:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.988 23:06:00 -- common/autotest_common.sh@862 -- # return 0 00:06:03.988 23:06:00 -- event/cpu_locks.sh@159 -- # waitforlisten 1283254 /var/tmp/spdk2.sock 00:06:03.988 23:06:00 -- common/autotest_common.sh@829 -- # '[' -z 1283254 ']' 00:06:03.988 23:06:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.988 23:06:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.988 23:06:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
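[Editor's note] The JSON-RPC request/response pair above is produced by rpc_cmd, a thin wrapper around scripts/rpc.py. The first framework_enable_cpumask_locks call (default socket, mask 0x7) claims cores 0-2, so the second call, issued against the other target's socket with mask 0x1c, gets error -32603 ("Failed to claim CPU core: 2"). Equivalent manual calls, assuming the sockets used in this run:

  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  "$SPDK_DIR"/scripts/rpc.py framework_enable_cpumask_locks                         # /var/tmp/spdk.sock: succeeds, locks cores 0-2
  "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # overlaps on core 2: error -32603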
00:06:03.988 23:06:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.988 23:06:00 -- common/autotest_common.sh@10 -- # set +x 00:06:04.247 23:06:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.247 23:06:00 -- common/autotest_common.sh@862 -- # return 0 00:06:04.247 23:06:00 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:04.247 23:06:00 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:04.247 23:06:00 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:04.247 23:06:00 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:04.247 00:06:04.247 real 0m2.134s 00:06:04.247 user 0m0.886s 00:06:04.247 sys 0m0.179s 00:06:04.247 23:06:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.247 23:06:00 -- common/autotest_common.sh@10 -- # set +x 00:06:04.247 ************************************ 00:06:04.247 END TEST locking_overlapped_coremask_via_rpc 00:06:04.247 ************************************ 00:06:04.247 23:06:00 -- event/cpu_locks.sh@174 -- # cleanup 00:06:04.247 23:06:00 -- event/cpu_locks.sh@15 -- # [[ -z 1282984 ]] 00:06:04.247 23:06:00 -- event/cpu_locks.sh@15 -- # killprocess 1282984 00:06:04.247 23:06:00 -- common/autotest_common.sh@936 -- # '[' -z 1282984 ']' 00:06:04.247 23:06:00 -- common/autotest_common.sh@940 -- # kill -0 1282984 00:06:04.247 23:06:00 -- common/autotest_common.sh@941 -- # uname 00:06:04.247 23:06:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.247 23:06:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1282984 00:06:04.247 23:06:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.247 23:06:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.247 23:06:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1282984' 00:06:04.247 killing process with pid 1282984 00:06:04.247 23:06:00 -- common/autotest_common.sh@955 -- # kill 1282984 00:06:04.247 23:06:00 -- common/autotest_common.sh@960 -- # wait 1282984 00:06:04.816 23:06:01 -- event/cpu_locks.sh@16 -- # [[ -z 1283254 ]] 00:06:04.816 23:06:01 -- event/cpu_locks.sh@16 -- # killprocess 1283254 00:06:04.816 23:06:01 -- common/autotest_common.sh@936 -- # '[' -z 1283254 ']' 00:06:04.816 23:06:01 -- common/autotest_common.sh@940 -- # kill -0 1283254 00:06:04.816 23:06:01 -- common/autotest_common.sh@941 -- # uname 00:06:04.816 23:06:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.816 23:06:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1283254 00:06:04.816 23:06:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:04.816 23:06:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:04.816 23:06:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1283254' 00:06:04.816 killing process with pid 1283254 00:06:04.816 23:06:01 -- common/autotest_common.sh@955 -- # kill 1283254 00:06:04.816 23:06:01 -- common/autotest_common.sh@960 -- # wait 1283254 00:06:05.075 23:06:01 -- event/cpu_locks.sh@18 -- # rm -f 00:06:05.075 23:06:01 -- event/cpu_locks.sh@1 -- # cleanup 00:06:05.075 23:06:01 -- event/cpu_locks.sh@15 -- # [[ -z 1282984 ]] 00:06:05.075 23:06:01 -- event/cpu_locks.sh@15 -- # killprocess 1282984 
00:06:05.075 23:06:01 -- common/autotest_common.sh@936 -- # '[' -z 1282984 ']' 00:06:05.075 23:06:01 -- common/autotest_common.sh@940 -- # kill -0 1282984 00:06:05.075 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1282984) - No such process 00:06:05.075 23:06:01 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1282984 is not found' 00:06:05.075 Process with pid 1282984 is not found 00:06:05.075 23:06:01 -- event/cpu_locks.sh@16 -- # [[ -z 1283254 ]] 00:06:05.075 23:06:01 -- event/cpu_locks.sh@16 -- # killprocess 1283254 00:06:05.075 23:06:01 -- common/autotest_common.sh@936 -- # '[' -z 1283254 ']' 00:06:05.075 23:06:01 -- common/autotest_common.sh@940 -- # kill -0 1283254 00:06:05.075 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1283254) - No such process 00:06:05.075 23:06:01 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1283254 is not found' 00:06:05.075 Process with pid 1283254 is not found 00:06:05.075 23:06:01 -- event/cpu_locks.sh@18 -- # rm -f 00:06:05.075 00:06:05.075 real 0m18.666s 00:06:05.075 user 0m31.657s 00:06:05.075 sys 0m5.915s 00:06:05.075 23:06:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.075 23:06:01 -- common/autotest_common.sh@10 -- # set +x 00:06:05.075 ************************************ 00:06:05.075 END TEST cpu_locks 00:06:05.075 ************************************ 00:06:05.075 00:06:05.075 real 0m43.754s 00:06:05.075 user 1m21.088s 00:06:05.075 sys 0m9.998s 00:06:05.075 23:06:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.075 23:06:01 -- common/autotest_common.sh@10 -- # set +x 00:06:05.075 ************************************ 00:06:05.075 END TEST event 00:06:05.075 ************************************ 00:06:05.075 23:06:01 -- spdk/autotest.sh@175 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:05.075 23:06:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.075 23:06:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.075 23:06:01 -- common/autotest_common.sh@10 -- # set +x 00:06:05.075 ************************************ 00:06:05.075 START TEST thread 00:06:05.075 ************************************ 00:06:05.075 23:06:01 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:06:05.336 * Looking for test storage... 
00:06:05.336 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:06:05.336 23:06:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:05.336 23:06:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:05.336 23:06:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:05.336 23:06:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:05.336 23:06:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:05.336 23:06:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:05.336 23:06:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:05.336 23:06:01 -- scripts/common.sh@335 -- # IFS=.-: 00:06:05.336 23:06:01 -- scripts/common.sh@335 -- # read -ra ver1 00:06:05.336 23:06:01 -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.336 23:06:01 -- scripts/common.sh@336 -- # read -ra ver2 00:06:05.336 23:06:01 -- scripts/common.sh@337 -- # local 'op=<' 00:06:05.336 23:06:01 -- scripts/common.sh@339 -- # ver1_l=2 00:06:05.336 23:06:01 -- scripts/common.sh@340 -- # ver2_l=1 00:06:05.336 23:06:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:05.336 23:06:01 -- scripts/common.sh@343 -- # case "$op" in 00:06:05.336 23:06:01 -- scripts/common.sh@344 -- # : 1 00:06:05.336 23:06:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:05.336 23:06:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.336 23:06:01 -- scripts/common.sh@364 -- # decimal 1 00:06:05.336 23:06:01 -- scripts/common.sh@352 -- # local d=1 00:06:05.336 23:06:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.336 23:06:01 -- scripts/common.sh@354 -- # echo 1 00:06:05.336 23:06:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:05.336 23:06:01 -- scripts/common.sh@365 -- # decimal 2 00:06:05.336 23:06:01 -- scripts/common.sh@352 -- # local d=2 00:06:05.336 23:06:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.336 23:06:01 -- scripts/common.sh@354 -- # echo 2 00:06:05.336 23:06:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:05.336 23:06:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:05.336 23:06:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:05.336 23:06:01 -- scripts/common.sh@367 -- # return 0 00:06:05.336 23:06:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.336 23:06:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:05.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.336 --rc genhtml_branch_coverage=1 00:06:05.336 --rc genhtml_function_coverage=1 00:06:05.336 --rc genhtml_legend=1 00:06:05.336 --rc geninfo_all_blocks=1 00:06:05.336 --rc geninfo_unexecuted_blocks=1 00:06:05.336 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:05.336 ' 00:06:05.336 23:06:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:05.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.336 --rc genhtml_branch_coverage=1 00:06:05.336 --rc genhtml_function_coverage=1 00:06:05.336 --rc genhtml_legend=1 00:06:05.336 --rc geninfo_all_blocks=1 00:06:05.336 --rc geninfo_unexecuted_blocks=1 00:06:05.336 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:05.336 ' 00:06:05.336 23:06:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:05.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.336 --rc genhtml_branch_coverage=1 
00:06:05.336 --rc genhtml_function_coverage=1 00:06:05.336 --rc genhtml_legend=1 00:06:05.336 --rc geninfo_all_blocks=1 00:06:05.336 --rc geninfo_unexecuted_blocks=1 00:06:05.336 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:05.336 ' 00:06:05.336 23:06:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:05.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.336 --rc genhtml_branch_coverage=1 00:06:05.336 --rc genhtml_function_coverage=1 00:06:05.336 --rc genhtml_legend=1 00:06:05.336 --rc geninfo_all_blocks=1 00:06:05.336 --rc geninfo_unexecuted_blocks=1 00:06:05.336 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:05.336 ' 00:06:05.336 23:06:01 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:05.336 23:06:01 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:05.336 23:06:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.336 23:06:01 -- common/autotest_common.sh@10 -- # set +x 00:06:05.336 ************************************ 00:06:05.336 START TEST thread_poller_perf 00:06:05.336 ************************************ 00:06:05.336 23:06:01 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:05.336 [2024-11-17 23:06:01.838851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:05.336 [2024-11-17 23:06:01.838943] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1283761 ] 00:06:05.336 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.336 [2024-11-17 23:06:01.909684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.596 [2024-11-17 23:06:01.981663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.596 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:06.531 [2024-11-17T22:06:03.145Z] ====================================== 00:06:06.531 [2024-11-17T22:06:03.145Z] busy:2506640484 (cyc) 00:06:06.531 [2024-11-17T22:06:03.145Z] total_run_count: 773000 00:06:06.531 [2024-11-17T22:06:03.145Z] tsc_hz: 2500000000 (cyc) 00:06:06.531 [2024-11-17T22:06:03.145Z] ====================================== 00:06:06.531 [2024-11-17T22:06:03.145Z] poller_cost: 3242 (cyc), 1296 (nsec) 00:06:06.531 00:06:06.531 real 0m1.227s 00:06:06.531 user 0m1.139s 00:06:06.531 sys 0m0.084s 00:06:06.531 23:06:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.531 23:06:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.531 ************************************ 00:06:06.531 END TEST thread_poller_perf 00:06:06.531 ************************************ 00:06:06.531 23:06:03 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.531 23:06:03 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:06.531 23:06:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.531 23:06:03 -- common/autotest_common.sh@10 -- # set +x 00:06:06.531 ************************************ 00:06:06.531 START TEST thread_poller_perf 00:06:06.531 ************************************ 00:06:06.531 23:06:03 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.531 [2024-11-17 23:06:03.118523] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:06.531 [2024-11-17 23:06:03.118618] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284043 ] 00:06:06.790 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.790 [2024-11-17 23:06:03.190444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.790 [2024-11-17 23:06:03.259141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.790 Running 1000 pollers for 1 seconds with 0 microseconds period. 
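[Editor's note] poller_cost in these summaries is simply busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. Checking the 1-microsecond-period table above with plain integer shell arithmetic:

  busy=2506640484; runs=773000; tsc_hz=2500000000
  echo $(( busy / runs ))                          # 3242 cyc per poller invocation
  echo $(( busy / runs * 1000000000 / tsc_hz ))    # 1296 nsec, matching the report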
00:06:07.727 [2024-11-17T22:06:04.341Z] ====================================== 00:06:07.727 [2024-11-17T22:06:04.341Z] busy:2502032130 (cyc) 00:06:07.727 [2024-11-17T22:06:04.341Z] total_run_count: 12850000 00:06:07.727 [2024-11-17T22:06:04.341Z] tsc_hz: 2500000000 (cyc) 00:06:07.727 [2024-11-17T22:06:04.341Z] ====================================== 00:06:07.727 [2024-11-17T22:06:04.341Z] poller_cost: 194 (cyc), 77 (nsec) 00:06:07.727 00:06:07.727 real 0m1.227s 00:06:07.727 user 0m1.138s 00:06:07.727 sys 0m0.085s 00:06:07.727 23:06:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:07.727 23:06:04 -- common/autotest_common.sh@10 -- # set +x 00:06:07.727 ************************************ 00:06:07.727 END TEST thread_poller_perf 00:06:07.727 ************************************ 00:06:07.989 23:06:04 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:06:07.989 23:06:04 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:07.989 23:06:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.989 23:06:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.989 23:06:04 -- common/autotest_common.sh@10 -- # set +x 00:06:07.989 ************************************ 00:06:07.989 START TEST thread_spdk_lock 00:06:07.989 ************************************ 00:06:07.989 23:06:04 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:06:07.989 [2024-11-17 23:06:04.394666] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:07.989 [2024-11-17 23:06:04.394753] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284519 ] 00:06:07.989 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.989 [2024-11-17 23:06:04.464841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.989 [2024-11-17 23:06:04.535555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.989 [2024-11-17 23:06:04.535557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.558 [2024-11-17 23:06:05.021092] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 957:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:08.558 [2024-11-17 23:06:05.021133] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:08.558 [2024-11-17 23:06:05.021144] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x1483c80 00:06:08.558 [2024-11-17 23:06:05.022064] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:08.558 [2024-11-17 23:06:05.022168] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1018:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:08.558 [2024-11-17 23:06:05.022188] 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:06:08.558 Starting test contend 00:06:08.558 Worker Delay Wait us Hold us Total us 00:06:08.558 0 3 169121 183566 352687 00:06:08.558 1 5 88397 284509 372907 00:06:08.558 PASS test contend 00:06:08.558 Starting test hold_by_poller 00:06:08.559 PASS test hold_by_poller 00:06:08.559 Starting test hold_by_message 00:06:08.559 PASS test hold_by_message 00:06:08.559 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:06:08.559 100014 assertions passed 00:06:08.559 0 assertions failed 00:06:08.559 00:06:08.559 real 0m0.706s 00:06:08.559 user 0m1.100s 00:06:08.559 sys 0m0.090s 00:06:08.559 23:06:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.559 23:06:05 -- common/autotest_common.sh@10 -- # set +x 00:06:08.559 ************************************ 00:06:08.559 END TEST thread_spdk_lock 00:06:08.559 ************************************ 00:06:08.559 00:06:08.559 real 0m3.497s 00:06:08.559 user 0m3.535s 00:06:08.559 sys 0m0.477s 00:06:08.559 23:06:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.559 23:06:05 -- common/autotest_common.sh@10 -- # set +x 00:06:08.559 ************************************ 00:06:08.559 END TEST thread 00:06:08.559 ************************************ 00:06:08.559 23:06:05 -- spdk/autotest.sh@176 -- # run_test accel /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:06:08.559 23:06:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.559 23:06:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.559 23:06:05 -- common/autotest_common.sh@10 -- # set +x 00:06:08.819 ************************************ 00:06:08.819 START TEST accel 00:06:08.819 ************************************ 00:06:08.819 23:06:05 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel.sh 00:06:08.819 * Looking for test storage... 00:06:08.819 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:06:08.819 23:06:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:08.819 23:06:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:08.819 23:06:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:08.819 23:06:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:08.819 23:06:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:08.819 23:06:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:08.819 23:06:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:08.819 23:06:05 -- scripts/common.sh@335 -- # IFS=.-: 00:06:08.819 23:06:05 -- scripts/common.sh@335 -- # read -ra ver1 00:06:08.819 23:06:05 -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.819 23:06:05 -- scripts/common.sh@336 -- # read -ra ver2 00:06:08.819 23:06:05 -- scripts/common.sh@337 -- # local 'op=<' 00:06:08.819 23:06:05 -- scripts/common.sh@339 -- # ver1_l=2 00:06:08.819 23:06:05 -- scripts/common.sh@340 -- # ver2_l=1 00:06:08.819 23:06:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:08.819 23:06:05 -- scripts/common.sh@343 -- # case "$op" in 00:06:08.819 23:06:05 -- scripts/common.sh@344 -- # : 1 00:06:08.819 23:06:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:08.819 23:06:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.819 23:06:05 -- scripts/common.sh@364 -- # decimal 1 00:06:08.819 23:06:05 -- scripts/common.sh@352 -- # local d=1 00:06:08.819 23:06:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.819 23:06:05 -- scripts/common.sh@354 -- # echo 1 00:06:08.819 23:06:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:08.819 23:06:05 -- scripts/common.sh@365 -- # decimal 2 00:06:08.819 23:06:05 -- scripts/common.sh@352 -- # local d=2 00:06:08.819 23:06:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.819 23:06:05 -- scripts/common.sh@354 -- # echo 2 00:06:08.819 23:06:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:08.819 23:06:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:08.819 23:06:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:08.819 23:06:05 -- scripts/common.sh@367 -- # return 0 00:06:08.819 23:06:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.819 23:06:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.819 --rc genhtml_branch_coverage=1 00:06:08.819 --rc genhtml_function_coverage=1 00:06:08.819 --rc genhtml_legend=1 00:06:08.819 --rc geninfo_all_blocks=1 00:06:08.819 --rc geninfo_unexecuted_blocks=1 00:06:08.819 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:08.819 ' 00:06:08.819 23:06:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.819 --rc genhtml_branch_coverage=1 00:06:08.819 --rc genhtml_function_coverage=1 00:06:08.819 --rc genhtml_legend=1 00:06:08.819 --rc geninfo_all_blocks=1 00:06:08.819 --rc geninfo_unexecuted_blocks=1 00:06:08.819 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:08.819 ' 00:06:08.819 23:06:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.819 --rc genhtml_branch_coverage=1 00:06:08.819 --rc genhtml_function_coverage=1 00:06:08.819 --rc genhtml_legend=1 00:06:08.819 --rc geninfo_all_blocks=1 00:06:08.819 --rc geninfo_unexecuted_blocks=1 00:06:08.819 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:08.819 ' 00:06:08.819 23:06:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:08.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.819 --rc genhtml_branch_coverage=1 00:06:08.819 --rc genhtml_function_coverage=1 00:06:08.819 --rc genhtml_legend=1 00:06:08.819 --rc geninfo_all_blocks=1 00:06:08.819 --rc geninfo_unexecuted_blocks=1 00:06:08.819 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:08.819 ' 00:06:08.819 23:06:05 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:08.819 23:06:05 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:08.819 23:06:05 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:08.819 23:06:05 -- accel/accel.sh@59 -- # spdk_tgt_pid=1284951 00:06:08.819 23:06:05 -- accel/accel.sh@60 -- # waitforlisten 1284951 00:06:08.819 23:06:05 -- common/autotest_common.sh@829 -- # '[' -z 1284951 ']' 00:06:08.819 23:06:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.819 23:06:05 -- accel/accel.sh@58 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:08.819 23:06:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.819 23:06:05 -- accel/accel.sh@58 -- # build_accel_config 00:06:08.819 23:06:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.819 23:06:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.819 23:06:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.819 23:06:05 -- common/autotest_common.sh@10 -- # set +x 00:06:08.819 23:06:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.819 23:06:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.819 23:06:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.819 23:06:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.819 23:06:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.819 23:06:05 -- accel/accel.sh@42 -- # jq -r . 00:06:08.819 [2024-11-17 23:06:05.374719] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.819 [2024-11-17 23:06:05.374783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1284951 ] 00:06:08.819 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.079 [2024-11-17 23:06:05.442813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.079 [2024-11-17 23:06:05.524036] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:09.079 [2024-11-17 23:06:05.524140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.647 23:06:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.647 23:06:06 -- common/autotest_common.sh@862 -- # return 0 00:06:09.647 23:06:06 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:09.647 23:06:06 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:09.647 23:06:06 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:09.647 23:06:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.647 23:06:06 -- common/autotest_common.sh@10 -- # set +x 00:06:09.647 23:06:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.647 23:06:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.647 23:06:06 -- accel/accel.sh@64 -- # IFS== 00:06:09.647 23:06:06 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.647 23:06:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.647 23:06:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.647 23:06:06 -- accel/accel.sh@64 -- # IFS== 00:06:09.647 23:06:06 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.647 23:06:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.647 23:06:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.647 23:06:06 -- accel/accel.sh@64 -- # IFS== 00:06:09.647 23:06:06 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.647 23:06:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.647 23:06:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.647 23:06:06 -- accel/accel.sh@64 -- # IFS== 00:06:09.647 23:06:06 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.647 23:06:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.647 23:06:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.647 23:06:06 -- accel/accel.sh@64 -- # IFS== 00:06:09.647 23:06:06 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.647 23:06:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.648 23:06:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.648 23:06:06 -- accel/accel.sh@64 -- # IFS== 00:06:09.648 23:06:06 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.648 23:06:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.648 23:06:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.648 23:06:06 -- accel/accel.sh@64 -- # IFS== 00:06:09.648 23:06:06 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.648 23:06:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.648 23:06:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.648 23:06:06 -- accel/accel.sh@64 -- # IFS== 00:06:09.648 23:06:06 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.648 23:06:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.648 23:06:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.648 23:06:06 -- accel/accel.sh@64 -- # IFS== 00:06:09.648 23:06:06 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.648 23:06:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.648 23:06:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.648 23:06:06 -- accel/accel.sh@64 -- # IFS== 00:06:09.648 23:06:06 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.648 23:06:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.648 23:06:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.648 23:06:06 -- accel/accel.sh@64 -- # IFS== 00:06:09.648 23:06:06 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.648 23:06:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.648 23:06:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.648 23:06:06 -- accel/accel.sh@64 -- # IFS== 00:06:09.648 23:06:06 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.648 
23:06:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.648 23:06:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.648 23:06:06 -- accel/accel.sh@64 -- # IFS== 00:06:09.648 23:06:06 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.648 23:06:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.648 23:06:06 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.648 23:06:06 -- accel/accel.sh@64 -- # IFS== 00:06:09.648 23:06:06 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.648 23:06:06 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.648 23:06:06 -- accel/accel.sh@67 -- # killprocess 1284951 00:06:09.648 23:06:06 -- common/autotest_common.sh@936 -- # '[' -z 1284951 ']' 00:06:09.907 23:06:06 -- common/autotest_common.sh@940 -- # kill -0 1284951 00:06:09.907 23:06:06 -- common/autotest_common.sh@941 -- # uname 00:06:09.907 23:06:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:09.907 23:06:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1284951 00:06:09.907 23:06:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:09.907 23:06:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:09.907 23:06:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1284951' 00:06:09.907 killing process with pid 1284951 00:06:09.907 23:06:06 -- common/autotest_common.sh@955 -- # kill 1284951 00:06:09.907 23:06:06 -- common/autotest_common.sh@960 -- # wait 1284951 00:06:10.167 23:06:06 -- accel/accel.sh@68 -- # trap - ERR 00:06:10.167 23:06:06 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:10.167 23:06:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:10.167 23:06:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.167 23:06:06 -- common/autotest_common.sh@10 -- # set +x 00:06:10.167 23:06:06 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:10.167 23:06:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:10.167 23:06:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.167 23:06:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.167 23:06:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.167 23:06:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.167 23:06:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.167 23:06:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.167 23:06:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.167 23:06:06 -- accel/accel.sh@42 -- # jq -r . 
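[Editor's note] The long IFS== / read loop above is accel.sh flattening the accel_get_opc_assignments RPC into "opcode=module" pairs; with no accelerator drivers loaded in this run, every opcode resolves to the software module. The same query by hand, assuming a target listening on the default socket:

  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  "$SPDK_DIR"/scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # prints one line per opcode, e.g. copy=software, fill=software, crc32c=software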
00:06:10.167 23:06:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.167 23:06:06 -- common/autotest_common.sh@10 -- # set +x 00:06:10.167 23:06:06 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:10.167 23:06:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:10.167 23:06:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.167 23:06:06 -- common/autotest_common.sh@10 -- # set +x 00:06:10.167 ************************************ 00:06:10.167 START TEST accel_missing_filename 00:06:10.167 ************************************ 00:06:10.167 23:06:06 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:10.167 23:06:06 -- common/autotest_common.sh@650 -- # local es=0 00:06:10.167 23:06:06 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:10.167 23:06:06 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:10.167 23:06:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.167 23:06:06 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:10.167 23:06:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.167 23:06:06 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:10.167 23:06:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:10.167 23:06:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.167 23:06:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.167 23:06:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.167 23:06:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.167 23:06:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.167 23:06:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.167 23:06:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.167 23:06:06 -- accel/accel.sh@42 -- # jq -r . 00:06:10.167 [2024-11-17 23:06:06.729030] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:10.167 [2024-11-17 23:06:06.729110] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285210 ] 00:06:10.167 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.427 [2024-11-17 23:06:06.798737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.427 [2024-11-17 23:06:06.868676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.427 [2024-11-17 23:06:06.907852] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.427 [2024-11-17 23:06:06.967100] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:10.427 A filename is required. 
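[Editor's note] "A filename is required." is the expected failure here: compress/decompress workloads read their input from a file passed with -l, which this negative test deliberately omits, and the NOT wrapper asserts the non-zero exit. Sketch of the failing and working forms, assuming the paths used in this run:

  PERF=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf
  "$PERF" -t 1 -w compress                                                                        # fails: A filename is required.
  "$PERF" -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib  # the form the next test exercises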
00:06:10.427 23:06:07 -- common/autotest_common.sh@653 -- # es=234 00:06:10.427 23:06:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.427 23:06:07 -- common/autotest_common.sh@662 -- # es=106 00:06:10.427 23:06:07 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:10.427 23:06:07 -- common/autotest_common.sh@670 -- # es=1 00:06:10.427 23:06:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.427 00:06:10.427 real 0m0.331s 00:06:10.427 user 0m0.242s 00:06:10.427 sys 0m0.125s 00:06:10.427 23:06:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.427 23:06:07 -- common/autotest_common.sh@10 -- # set +x 00:06:10.427 ************************************ 00:06:10.427 END TEST accel_missing_filename 00:06:10.427 ************************************ 00:06:10.687 23:06:07 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:10.687 23:06:07 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:10.687 23:06:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.687 23:06:07 -- common/autotest_common.sh@10 -- # set +x 00:06:10.687 ************************************ 00:06:10.687 START TEST accel_compress_verify 00:06:10.687 ************************************ 00:06:10.687 23:06:07 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:10.687 23:06:07 -- common/autotest_common.sh@650 -- # local es=0 00:06:10.687 23:06:07 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:10.687 23:06:07 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:10.687 23:06:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.687 23:06:07 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:10.687 23:06:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.687 23:06:07 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:10.687 23:06:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:10.687 23:06:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.687 23:06:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.687 23:06:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.687 23:06:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.687 23:06:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.687 23:06:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.687 23:06:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.687 23:06:07 -- accel/accel.sh@42 -- # jq -r . 00:06:10.687 [2024-11-17 23:06:07.105895] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:10.687 [2024-11-17 23:06:07.105982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285416 ] 00:06:10.687 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.687 [2024-11-17 23:06:07.176372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.687 [2024-11-17 23:06:07.243682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.687 [2024-11-17 23:06:07.283413] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.946 [2024-11-17 23:06:07.343710] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:10.946 00:06:10.947 Compression does not support the verify option, aborting. 00:06:10.947 23:06:07 -- common/autotest_common.sh@653 -- # es=161 00:06:10.947 23:06:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.947 23:06:07 -- common/autotest_common.sh@662 -- # es=33 00:06:10.947 23:06:07 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:10.947 23:06:07 -- common/autotest_common.sh@670 -- # es=1 00:06:10.947 23:06:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.947 00:06:10.947 real 0m0.330s 00:06:10.947 user 0m0.246s 00:06:10.947 sys 0m0.124s 00:06:10.947 23:06:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.947 23:06:07 -- common/autotest_common.sh@10 -- # set +x 00:06:10.947 ************************************ 00:06:10.947 END TEST accel_compress_verify 00:06:10.947 ************************************ 00:06:10.947 23:06:07 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:10.947 23:06:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:10.947 23:06:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.947 23:06:07 -- common/autotest_common.sh@10 -- # set +x 00:06:10.947 ************************************ 00:06:10.947 START TEST accel_wrong_workload 00:06:10.947 ************************************ 00:06:10.947 23:06:07 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:10.947 23:06:07 -- common/autotest_common.sh@650 -- # local es=0 00:06:10.947 23:06:07 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:10.947 23:06:07 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:10.947 23:06:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.947 23:06:07 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:10.947 23:06:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.947 23:06:07 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:10.947 23:06:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:10.947 23:06:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.947 23:06:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.947 23:06:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.947 23:06:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.947 23:06:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.947 23:06:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.947 23:06:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.947 23:06:07 -- accel/accel.sh@42 -- # jq -r . 
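The es= arithmetic in the traces above is the harness normalizing exit codes before judging a NOT-wrapped command: a status above 128 has 128 subtracted (es=234 became es=106 for the missing filename, es=161 became es=33 for the unsupported verify), the case statement then collapses it to es=1, and the test passes because the result is non-zero. The next test feeds accel_perf an unknown -w value, which its option parser rejects before any work is submitted. A rough sketch of that logic, with the caveat that the real helper in autotest_common.sh does more bookkeeping than this:

    # hypothetical simplification of the NOT wrapper seen in the trace
    NOT() { "$@"; local es=$?; (( es > 128 )) && es=$(( es - 128 )); (( es != 0 )); }
    NOT ./spdk/build/examples/accel_perf -t 1 -w foobar   # succeeds: 'foobar' is not a valid workload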
00:06:10.947 Unsupported workload type: foobar 00:06:10.947 [2024-11-17 23:06:07.474184] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:10.947 accel_perf options: 00:06:10.947 [-h help message] 00:06:10.947 [-q queue depth per core] 00:06:10.947 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:10.947 [-T number of threads per core 00:06:10.947 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:10.947 [-t time in seconds] 00:06:10.947 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:10.947 [ dif_verify, , dif_generate, dif_generate_copy 00:06:10.947 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:10.947 [-l for compress/decompress workloads, name of uncompressed input file 00:06:10.947 [-S for crc32c workload, use this seed value (default 0) 00:06:10.947 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:10.947 [-f for fill workload, use this BYTE value (default 255) 00:06:10.947 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:10.947 [-y verify result if this switch is on] 00:06:10.947 [-a tasks to allocate per core (default: same value as -q)] 00:06:10.947 Can be used to spread operations across a wider range of memory. 00:06:10.947 23:06:07 -- common/autotest_common.sh@653 -- # es=1 00:06:10.947 23:06:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.947 23:06:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:10.947 23:06:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.947 00:06:10.947 real 0m0.028s 00:06:10.947 user 0m0.010s 00:06:10.947 sys 0m0.017s 00:06:10.947 23:06:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.947 23:06:07 -- common/autotest_common.sh@10 -- # set +x 00:06:10.947 ************************************ 00:06:10.947 END TEST accel_wrong_workload 00:06:10.947 ************************************ 00:06:10.947 Error: writing output failed: Broken pipe 00:06:10.947 23:06:07 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:10.947 23:06:07 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:10.947 23:06:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.947 23:06:07 -- common/autotest_common.sh@10 -- # set +x 00:06:10.947 ************************************ 00:06:10.947 START TEST accel_negative_buffers 00:06:10.947 ************************************ 00:06:10.947 23:06:07 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:10.947 23:06:07 -- common/autotest_common.sh@650 -- # local es=0 00:06:10.947 23:06:07 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:10.947 23:06:07 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:10.947 23:06:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.947 23:06:07 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:10.947 23:06:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:10.947 23:06:07 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:10.947 23:06:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:06:10.947 23:06:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.947 23:06:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.947 23:06:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.947 23:06:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.947 23:06:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.947 23:06:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.947 23:06:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.947 23:06:07 -- accel/accel.sh@42 -- # jq -r . 00:06:10.947 -x option must be non-negative. 00:06:10.947 [2024-11-17 23:06:07.552542] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:10.947 accel_perf options: 00:06:10.947 [-h help message] 00:06:10.947 [-q queue depth per core] 00:06:10.947 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:10.947 [-T number of threads per core 00:06:10.947 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:10.947 [-t time in seconds] 00:06:10.947 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:10.947 [ dif_verify, , dif_generate, dif_generate_copy 00:06:10.947 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:10.947 [-l for compress/decompress workloads, name of uncompressed input file 00:06:10.947 [-S for crc32c workload, use this seed value (default 0) 00:06:10.947 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:10.947 [-f for fill workload, use this BYTE value (default 255) 00:06:10.947 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:10.947 [-y verify result if this switch is on] 00:06:10.947 [-a tasks to allocate per core (default: same value as -q)] 00:06:10.947 Can be used to spread operations across a wider range of memory. 
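Both argument-validation tests end the same way: accel_perf rejects the bad option while parsing ('w' for the foobar workload, 'x' for the negative buffer count), prints the option summary shown above, and exits 1, which is exactly the non-zero status the NOT wrapper requires; per that summary, xor takes a minimum of two source buffers, so -x -1 fails on both counts. Reproducing it directly, assuming the same binary path as the log:

    ./spdk/build/examples/accel_perf -t 1 -w xor -y -x -1   # parse error on -x; prints usage, exits 1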
00:06:10.947 23:06:07 -- common/autotest_common.sh@653 -- # es=1 00:06:10.947 23:06:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.947 23:06:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:10.947 23:06:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.947 00:06:10.947 real 0m0.030s 00:06:10.947 user 0m0.015s 00:06:10.947 sys 0m0.015s 00:06:10.947 23:06:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.207 23:06:07 -- common/autotest_common.sh@10 -- # set +x 00:06:11.207 ************************************ 00:06:11.207 END TEST accel_negative_buffers 00:06:11.207 ************************************ 00:06:11.207 Error: writing output failed: Broken pipe 00:06:11.207 23:06:07 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:11.207 23:06:07 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:11.207 23:06:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.207 23:06:07 -- common/autotest_common.sh@10 -- # set +x 00:06:11.207 ************************************ 00:06:11.207 START TEST accel_crc32c 00:06:11.207 ************************************ 00:06:11.207 23:06:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:11.207 23:06:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:11.207 23:06:07 -- accel/accel.sh@17 -- # local accel_module 00:06:11.207 23:06:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:11.207 23:06:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.207 23:06:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:11.207 23:06:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.207 23:06:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.207 23:06:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.207 23:06:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.207 23:06:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.207 23:06:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.207 23:06:07 -- accel/accel.sh@42 -- # jq -r . 00:06:11.207 [2024-11-17 23:06:07.623576] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:11.207 [2024-11-17 23:06:07.623656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285477 ] 00:06:11.207 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.207 [2024-11-17 23:06:07.693631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.207 [2024-11-17 23:06:07.762106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.586 23:06:08 -- accel/accel.sh@18 -- # out=' 00:06:12.586 SPDK Configuration: 00:06:12.586 Core mask: 0x1 00:06:12.586 00:06:12.586 Accel Perf Configuration: 00:06:12.586 Workload Type: crc32c 00:06:12.586 CRC-32C seed: 32 00:06:12.586 Transfer size: 4096 bytes 00:06:12.586 Vector count 1 00:06:12.586 Module: software 00:06:12.586 Queue depth: 32 00:06:12.586 Allocate depth: 32 00:06:12.586 # threads/core: 1 00:06:12.586 Run time: 1 seconds 00:06:12.586 Verify: Yes 00:06:12.586 00:06:12.586 Running for 1 seconds... 
00:06:12.586 00:06:12.586 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:12.586 ------------------------------------------------------------------------------------ 00:06:12.586 0,0 839136/s 3277 MiB/s 0 0 00:06:12.586 ==================================================================================== 00:06:12.586 Total 839136/s 3277 MiB/s 0 0' 00:06:12.586 23:06:08 -- accel/accel.sh@20 -- # IFS=: 00:06:12.586 23:06:08 -- accel/accel.sh@20 -- # read -r var val 00:06:12.586 23:06:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:12.586 23:06:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:12.586 23:06:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.586 23:06:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.586 23:06:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.586 23:06:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.586 23:06:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.586 23:06:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.586 23:06:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.586 23:06:08 -- accel/accel.sh@42 -- # jq -r . 00:06:12.586 [2024-11-17 23:06:08.952722] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:12.586 [2024-11-17 23:06:08.952831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1285743 ] 00:06:12.586 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.586 [2024-11-17 23:06:09.022211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.586 [2024-11-17 23:06:09.087974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.586 23:06:09 -- accel/accel.sh@21 -- # val= 00:06:12.586 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.586 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:12.586 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.586 23:06:09 -- accel/accel.sh@21 -- # val= 00:06:12.586 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.586 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:12.586 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.586 23:06:09 -- accel/accel.sh@21 -- # val=0x1 00:06:12.586 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.586 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:12.586 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.586 23:06:09 -- accel/accel.sh@21 -- # val= 00:06:12.586 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.586 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:12.586 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.586 23:06:09 -- accel/accel.sh@21 -- # val= 00:06:12.586 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.586 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:12.586 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.586 23:06:09 -- accel/accel.sh@21 -- # val=crc32c 00:06:12.587 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.587 23:06:09 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.587 23:06:09 -- accel/accel.sh@21 -- # val=32 00:06:12.587 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:12.587 
23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.587 23:06:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:12.587 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.587 23:06:09 -- accel/accel.sh@21 -- # val= 00:06:12.587 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.587 23:06:09 -- accel/accel.sh@21 -- # val=software 00:06:12.587 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.587 23:06:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.587 23:06:09 -- accel/accel.sh@21 -- # val=32 00:06:12.587 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.587 23:06:09 -- accel/accel.sh@21 -- # val=32 00:06:12.587 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.587 23:06:09 -- accel/accel.sh@21 -- # val=1 00:06:12.587 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.587 23:06:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:12.587 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.587 23:06:09 -- accel/accel.sh@21 -- # val=Yes 00:06:12.587 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.587 23:06:09 -- accel/accel.sh@21 -- # val= 00:06:12.587 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:12.587 23:06:09 -- accel/accel.sh@21 -- # val= 00:06:12.587 23:06:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # IFS=: 00:06:12.587 23:06:09 -- accel/accel.sh@20 -- # read -r var val 00:06:13.965 23:06:10 -- accel/accel.sh@21 -- # val= 00:06:13.965 23:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.965 23:06:10 -- accel/accel.sh@20 -- # IFS=: 00:06:13.965 23:06:10 -- accel/accel.sh@20 -- # read -r var val 00:06:13.965 23:06:10 -- accel/accel.sh@21 -- # val= 00:06:13.965 23:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.965 23:06:10 -- accel/accel.sh@20 -- # IFS=: 00:06:13.965 23:06:10 -- accel/accel.sh@20 -- # read -r var val 00:06:13.965 23:06:10 -- accel/accel.sh@21 -- # val= 00:06:13.965 23:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.965 23:06:10 -- accel/accel.sh@20 -- # IFS=: 00:06:13.965 23:06:10 -- accel/accel.sh@20 -- # read -r var val 00:06:13.965 23:06:10 -- accel/accel.sh@21 -- # val= 00:06:13.965 23:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.965 23:06:10 -- accel/accel.sh@20 -- # IFS=: 00:06:13.965 23:06:10 -- accel/accel.sh@20 -- # read -r var val 00:06:13.965 23:06:10 -- accel/accel.sh@21 -- # val= 00:06:13.965 23:06:10 -- accel/accel.sh@22 -- # case "$var" in 
00:06:13.965 23:06:10 -- accel/accel.sh@20 -- # IFS=: 00:06:13.965 23:06:10 -- accel/accel.sh@20 -- # read -r var val 00:06:13.965 23:06:10 -- accel/accel.sh@21 -- # val= 00:06:13.965 23:06:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.965 23:06:10 -- accel/accel.sh@20 -- # IFS=: 00:06:13.965 23:06:10 -- accel/accel.sh@20 -- # read -r var val 00:06:13.965 23:06:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:13.965 23:06:10 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:13.965 23:06:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.965 00:06:13.965 real 0m2.665s 00:06:13.965 user 0m2.410s 00:06:13.965 sys 0m0.263s 00:06:13.965 23:06:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.965 23:06:10 -- common/autotest_common.sh@10 -- # set +x 00:06:13.965 ************************************ 00:06:13.965 END TEST accel_crc32c 00:06:13.965 ************************************ 00:06:13.965 23:06:10 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:13.965 23:06:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:13.965 23:06:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.965 23:06:10 -- common/autotest_common.sh@10 -- # set +x 00:06:13.965 ************************************ 00:06:13.965 START TEST accel_crc32c_C2 00:06:13.965 ************************************ 00:06:13.965 23:06:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:13.965 23:06:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.965 23:06:10 -- accel/accel.sh@17 -- # local accel_module 00:06:13.965 23:06:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:13.965 23:06:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:13.965 23:06:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.965 23:06:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.965 23:06:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.965 23:06:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.965 23:06:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.966 23:06:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.966 23:06:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.966 23:06:10 -- accel/accel.sh@42 -- # jq -r . 00:06:13.966 [2024-11-17 23:06:10.331340] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:13.966 [2024-11-17 23:06:10.331412] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1286026 ] 00:06:13.966 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.966 [2024-11-17 23:06:10.401517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.966 [2024-11-17 23:06:10.470872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.346 23:06:11 -- accel/accel.sh@18 -- # out=' 00:06:15.346 SPDK Configuration: 00:06:15.346 Core mask: 0x1 00:06:15.346 00:06:15.346 Accel Perf Configuration: 00:06:15.346 Workload Type: crc32c 00:06:15.346 CRC-32C seed: 0 00:06:15.346 Transfer size: 4096 bytes 00:06:15.346 Vector count 2 00:06:15.346 Module: software 00:06:15.346 Queue depth: 32 00:06:15.346 Allocate depth: 32 00:06:15.346 # threads/core: 1 00:06:15.346 Run time: 1 seconds 00:06:15.346 Verify: Yes 00:06:15.346 00:06:15.346 Running for 1 seconds... 00:06:15.346 00:06:15.346 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:15.346 ------------------------------------------------------------------------------------ 00:06:15.346 0,0 598208/s 2336 MiB/s 0 0 00:06:15.346 ==================================================================================== 00:06:15.346 Total 598208/s 2336 MiB/s 0 0' 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:06:15.346 23:06:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:15.346 23:06:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:15.346 23:06:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.346 23:06:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.346 23:06:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.346 23:06:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.346 23:06:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.346 23:06:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.346 23:06:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.346 23:06:11 -- accel/accel.sh@42 -- # jq -r . 00:06:15.346 [2024-11-17 23:06:11.662108] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:15.346 [2024-11-17 23:06:11.662196] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1286285 ] 00:06:15.346 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.346 [2024-11-17 23:06:11.732180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.346 [2024-11-17 23:06:11.798701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.346 23:06:11 -- accel/accel.sh@21 -- # val= 00:06:15.346 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:06:15.346 23:06:11 -- accel/accel.sh@21 -- # val= 00:06:15.346 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:06:15.346 23:06:11 -- accel/accel.sh@21 -- # val=0x1 00:06:15.346 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:06:15.346 23:06:11 -- accel/accel.sh@21 -- # val= 00:06:15.346 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:06:15.346 23:06:11 -- accel/accel.sh@21 -- # val= 00:06:15.346 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:06:15.346 23:06:11 -- accel/accel.sh@21 -- # val=crc32c 00:06:15.346 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.346 23:06:11 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:06:15.346 23:06:11 -- accel/accel.sh@21 -- # val=0 00:06:15.346 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:06:15.346 23:06:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:15.346 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:06:15.346 23:06:11 -- accel/accel.sh@21 -- # val= 00:06:15.346 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:06:15.346 23:06:11 -- accel/accel.sh@21 -- # val=software 00:06:15.346 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.346 23:06:11 -- accel/accel.sh@23 -- # accel_module=software 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:06:15.346 23:06:11 -- accel/accel.sh@21 -- # val=32 00:06:15.346 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:06:15.346 23:06:11 -- accel/accel.sh@21 -- # val=32 00:06:15.346 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:06:15.346 23:06:11 -- 
accel/accel.sh@21 -- # val=1 00:06:15.346 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:06:15.346 23:06:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:15.346 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:06:15.346 23:06:11 -- accel/accel.sh@21 -- # val=Yes 00:06:15.346 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:06:15.346 23:06:11 -- accel/accel.sh@21 -- # val= 00:06:15.346 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:06:15.346 23:06:11 -- accel/accel.sh@21 -- # val= 00:06:15.346 23:06:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # IFS=: 00:06:15.346 23:06:11 -- accel/accel.sh@20 -- # read -r var val 00:06:16.727 23:06:12 -- accel/accel.sh@21 -- # val= 00:06:16.727 23:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.727 23:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.727 23:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.727 23:06:12 -- accel/accel.sh@21 -- # val= 00:06:16.727 23:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.727 23:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.727 23:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.727 23:06:12 -- accel/accel.sh@21 -- # val= 00:06:16.727 23:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.727 23:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.727 23:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.727 23:06:12 -- accel/accel.sh@21 -- # val= 00:06:16.727 23:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.727 23:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.727 23:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.727 23:06:12 -- accel/accel.sh@21 -- # val= 00:06:16.727 23:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.727 23:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.727 23:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.727 23:06:12 -- accel/accel.sh@21 -- # val= 00:06:16.727 23:06:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.727 23:06:12 -- accel/accel.sh@20 -- # IFS=: 00:06:16.727 23:06:12 -- accel/accel.sh@20 -- # read -r var val 00:06:16.727 23:06:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:16.727 23:06:12 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:16.727 23:06:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.727 00:06:16.727 real 0m2.664s 00:06:16.727 user 0m2.411s 00:06:16.727 sys 0m0.261s 00:06:16.727 23:06:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.727 23:06:12 -- common/autotest_common.sh@10 -- # set +x 00:06:16.727 ************************************ 00:06:16.727 END TEST accel_crc32c_C2 00:06:16.727 ************************************ 00:06:16.727 23:06:13 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:16.727 23:06:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:16.727 23:06:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.727 23:06:13 -- common/autotest_common.sh@10 -- # set +x 00:06:16.727 ************************************ 00:06:16.727 START TEST accel_copy 
00:06:16.727 ************************************ 00:06:16.727 23:06:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:16.727 23:06:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:16.727 23:06:13 -- accel/accel.sh@17 -- # local accel_module 00:06:16.727 23:06:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:16.727 23:06:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:16.727 23:06:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.727 23:06:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.727 23:06:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.727 23:06:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.727 23:06:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.727 23:06:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.727 23:06:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.727 23:06:13 -- accel/accel.sh@42 -- # jq -r . 00:06:16.727 [2024-11-17 23:06:13.039385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:16.727 [2024-11-17 23:06:13.039471] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1286488 ] 00:06:16.727 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.727 [2024-11-17 23:06:13.107679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.727 [2024-11-17 23:06:13.175616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.107 23:06:14 -- accel/accel.sh@18 -- # out=' 00:06:18.107 SPDK Configuration: 00:06:18.107 Core mask: 0x1 00:06:18.107 00:06:18.107 Accel Perf Configuration: 00:06:18.107 Workload Type: copy 00:06:18.107 Transfer size: 4096 bytes 00:06:18.107 Vector count 1 00:06:18.107 Module: software 00:06:18.107 Queue depth: 32 00:06:18.107 Allocate depth: 32 00:06:18.107 # threads/core: 1 00:06:18.107 Run time: 1 seconds 00:06:18.107 Verify: Yes 00:06:18.107 00:06:18.107 Running for 1 seconds... 00:06:18.107 00:06:18.107 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.107 ------------------------------------------------------------------------------------ 00:06:18.107 0,0 504096/s 1969 MiB/s 0 0 00:06:18.107 ==================================================================================== 00:06:18.107 Total 504096/s 1969 MiB/s 0 0' 00:06:18.107 23:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.107 23:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.107 23:06:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:18.107 23:06:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:18.107 23:06:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.107 23:06:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.107 23:06:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.107 23:06:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.107 23:06:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.107 23:06:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.107 23:06:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.107 23:06:14 -- accel/accel.sh@42 -- # jq -r . 00:06:18.107 [2024-11-17 23:06:14.365384] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:18.107 [2024-11-17 23:06:14.365475] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1286655 ] 00:06:18.107 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.107 [2024-11-17 23:06:14.433147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.108 [2024-11-17 23:06:14.500514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.108 23:06:14 -- accel/accel.sh@21 -- # val= 00:06:18.108 23:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.108 23:06:14 -- accel/accel.sh@21 -- # val= 00:06:18.108 23:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.108 23:06:14 -- accel/accel.sh@21 -- # val=0x1 00:06:18.108 23:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.108 23:06:14 -- accel/accel.sh@21 -- # val= 00:06:18.108 23:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.108 23:06:14 -- accel/accel.sh@21 -- # val= 00:06:18.108 23:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.108 23:06:14 -- accel/accel.sh@21 -- # val=copy 00:06:18.108 23:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.108 23:06:14 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.108 23:06:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.108 23:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.108 23:06:14 -- accel/accel.sh@21 -- # val= 00:06:18.108 23:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.108 23:06:14 -- accel/accel.sh@21 -- # val=software 00:06:18.108 23:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.108 23:06:14 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.108 23:06:14 -- accel/accel.sh@21 -- # val=32 00:06:18.108 23:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.108 23:06:14 -- accel/accel.sh@21 -- # val=32 00:06:18.108 23:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.108 23:06:14 -- accel/accel.sh@21 -- # val=1 00:06:18.108 23:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.108 23:06:14 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:18.108 23:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.108 23:06:14 -- accel/accel.sh@21 -- # val=Yes 00:06:18.108 23:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.108 23:06:14 -- accel/accel.sh@21 -- # val= 00:06:18.108 23:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:18.108 23:06:14 -- accel/accel.sh@21 -- # val= 00:06:18.108 23:06:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # IFS=: 00:06:18.108 23:06:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.489 23:06:15 -- accel/accel.sh@21 -- # val= 00:06:19.489 23:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.489 23:06:15 -- accel/accel.sh@20 -- # IFS=: 00:06:19.489 23:06:15 -- accel/accel.sh@20 -- # read -r var val 00:06:19.489 23:06:15 -- accel/accel.sh@21 -- # val= 00:06:19.489 23:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.489 23:06:15 -- accel/accel.sh@20 -- # IFS=: 00:06:19.489 23:06:15 -- accel/accel.sh@20 -- # read -r var val 00:06:19.489 23:06:15 -- accel/accel.sh@21 -- # val= 00:06:19.489 23:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.489 23:06:15 -- accel/accel.sh@20 -- # IFS=: 00:06:19.489 23:06:15 -- accel/accel.sh@20 -- # read -r var val 00:06:19.489 23:06:15 -- accel/accel.sh@21 -- # val= 00:06:19.489 23:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.489 23:06:15 -- accel/accel.sh@20 -- # IFS=: 00:06:19.489 23:06:15 -- accel/accel.sh@20 -- # read -r var val 00:06:19.489 23:06:15 -- accel/accel.sh@21 -- # val= 00:06:19.489 23:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.489 23:06:15 -- accel/accel.sh@20 -- # IFS=: 00:06:19.489 23:06:15 -- accel/accel.sh@20 -- # read -r var val 00:06:19.489 23:06:15 -- accel/accel.sh@21 -- # val= 00:06:19.489 23:06:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.489 23:06:15 -- accel/accel.sh@20 -- # IFS=: 00:06:19.489 23:06:15 -- accel/accel.sh@20 -- # read -r var val 00:06:19.489 23:06:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:19.489 23:06:15 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:19.489 23:06:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.489 00:06:19.489 real 0m2.655s 00:06:19.489 user 0m2.416s 00:06:19.489 sys 0m0.249s 00:06:19.489 23:06:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:19.489 23:06:15 -- common/autotest_common.sh@10 -- # set +x 00:06:19.489 ************************************ 00:06:19.489 END TEST accel_copy 00:06:19.489 ************************************ 00:06:19.489 23:06:15 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:19.489 23:06:15 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:19.489 23:06:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.489 23:06:15 -- common/autotest_common.sh@10 -- # set +x 00:06:19.489 ************************************ 00:06:19.489 START TEST accel_fill 00:06:19.489 ************************************ 00:06:19.489 23:06:15 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:19.489 23:06:15 -- accel/accel.sh@16 -- # local accel_opc 
00:06:19.489 23:06:15 -- accel/accel.sh@17 -- # local accel_module 00:06:19.489 23:06:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:19.489 23:06:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:19.489 23:06:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.489 23:06:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.489 23:06:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.489 23:06:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.489 23:06:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.489 23:06:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.489 23:06:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.489 23:06:15 -- accel/accel.sh@42 -- # jq -r . 00:06:19.489 [2024-11-17 23:06:15.736161] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.489 [2024-11-17 23:06:15.736247] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1286898 ] 00:06:19.489 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.489 [2024-11-17 23:06:15.807837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.489 [2024-11-17 23:06:15.876253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.869 23:06:17 -- accel/accel.sh@18 -- # out=' 00:06:20.870 SPDK Configuration: 00:06:20.870 Core mask: 0x1 00:06:20.870 00:06:20.870 Accel Perf Configuration: 00:06:20.870 Workload Type: fill 00:06:20.870 Fill pattern: 0x80 00:06:20.870 Transfer size: 4096 bytes 00:06:20.870 Vector count 1 00:06:20.870 Module: software 00:06:20.870 Queue depth: 64 00:06:20.870 Allocate depth: 64 00:06:20.870 # threads/core: 1 00:06:20.870 Run time: 1 seconds 00:06:20.870 Verify: Yes 00:06:20.870 00:06:20.870 Running for 1 seconds... 00:06:20.870 00:06:20.870 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:20.870 ------------------------------------------------------------------------------------ 00:06:20.870 0,0 971392/s 3794 MiB/s 0 0 00:06:20.870 ==================================================================================== 00:06:20.870 Total 971392/s 3794 MiB/s 0 0' 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.870 23:06:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:20.870 23:06:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:20.870 23:06:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.870 23:06:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.870 23:06:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.870 23:06:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.870 23:06:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.870 23:06:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.870 23:06:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.870 23:06:17 -- accel/accel.sh@42 -- # jq -r . 00:06:20.870 [2024-11-17 23:06:17.064691] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:20.870 [2024-11-17 23:06:17.064777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1287165 ] 00:06:20.870 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.870 [2024-11-17 23:06:17.132622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.870 [2024-11-17 23:06:17.199892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.870 23:06:17 -- accel/accel.sh@21 -- # val= 00:06:20.870 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.870 23:06:17 -- accel/accel.sh@21 -- # val= 00:06:20.870 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.870 23:06:17 -- accel/accel.sh@21 -- # val=0x1 00:06:20.870 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.870 23:06:17 -- accel/accel.sh@21 -- # val= 00:06:20.870 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.870 23:06:17 -- accel/accel.sh@21 -- # val= 00:06:20.870 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.870 23:06:17 -- accel/accel.sh@21 -- # val=fill 00:06:20.870 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.870 23:06:17 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.870 23:06:17 -- accel/accel.sh@21 -- # val=0x80 00:06:20.870 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.870 23:06:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:20.870 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.870 23:06:17 -- accel/accel.sh@21 -- # val= 00:06:20.870 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.870 23:06:17 -- accel/accel.sh@21 -- # val=software 00:06:20.870 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.870 23:06:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.870 23:06:17 -- accel/accel.sh@21 -- # val=64 00:06:20.870 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.870 23:06:17 -- accel/accel.sh@21 -- # val=64 00:06:20.870 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.870 23:06:17 -- 
accel/accel.sh@21 -- # val=1 00:06:20.870 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.870 23:06:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:20.870 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.870 23:06:17 -- accel/accel.sh@21 -- # val=Yes 00:06:20.870 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.870 23:06:17 -- accel/accel.sh@21 -- # val= 00:06:20.870 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:20.870 23:06:17 -- accel/accel.sh@21 -- # val= 00:06:20.870 23:06:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # IFS=: 00:06:20.870 23:06:17 -- accel/accel.sh@20 -- # read -r var val 00:06:21.810 23:06:18 -- accel/accel.sh@21 -- # val= 00:06:21.810 23:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.810 23:06:18 -- accel/accel.sh@20 -- # IFS=: 00:06:21.810 23:06:18 -- accel/accel.sh@20 -- # read -r var val 00:06:21.810 23:06:18 -- accel/accel.sh@21 -- # val= 00:06:21.810 23:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.810 23:06:18 -- accel/accel.sh@20 -- # IFS=: 00:06:21.810 23:06:18 -- accel/accel.sh@20 -- # read -r var val 00:06:21.810 23:06:18 -- accel/accel.sh@21 -- # val= 00:06:21.810 23:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.810 23:06:18 -- accel/accel.sh@20 -- # IFS=: 00:06:21.810 23:06:18 -- accel/accel.sh@20 -- # read -r var val 00:06:21.810 23:06:18 -- accel/accel.sh@21 -- # val= 00:06:21.810 23:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.810 23:06:18 -- accel/accel.sh@20 -- # IFS=: 00:06:21.810 23:06:18 -- accel/accel.sh@20 -- # read -r var val 00:06:21.810 23:06:18 -- accel/accel.sh@21 -- # val= 00:06:21.810 23:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.810 23:06:18 -- accel/accel.sh@20 -- # IFS=: 00:06:21.810 23:06:18 -- accel/accel.sh@20 -- # read -r var val 00:06:21.810 23:06:18 -- accel/accel.sh@21 -- # val= 00:06:21.810 23:06:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.810 23:06:18 -- accel/accel.sh@20 -- # IFS=: 00:06:21.810 23:06:18 -- accel/accel.sh@20 -- # read -r var val 00:06:21.810 23:06:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:21.810 23:06:18 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:21.810 23:06:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.810 00:06:21.810 real 0m2.663s 00:06:21.810 user 0m2.405s 00:06:21.810 sys 0m0.266s 00:06:21.810 23:06:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.810 23:06:18 -- common/autotest_common.sh@10 -- # set +x 00:06:21.810 ************************************ 00:06:21.810 END TEST accel_fill 00:06:21.810 ************************************ 00:06:21.810 23:06:18 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:21.810 23:06:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:21.810 23:06:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.810 23:06:18 -- common/autotest_common.sh@10 -- # set +x 00:06:21.810 ************************************ 00:06:21.810 START TEST 
accel_copy_crc32c 00:06:21.810 ************************************ 00:06:21.810 23:06:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:21.810 23:06:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.810 23:06:18 -- accel/accel.sh@17 -- # local accel_module 00:06:22.070 23:06:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:22.070 23:06:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:22.070 23:06:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.070 23:06:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.070 23:06:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.070 23:06:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.070 23:06:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.070 23:06:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.070 23:06:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.070 23:06:18 -- accel/accel.sh@42 -- # jq -r . 00:06:22.070 [2024-11-17 23:06:18.442987] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:22.070 [2024-11-17 23:06:18.443065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1287450 ] 00:06:22.070 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.070 [2024-11-17 23:06:18.511783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.070 [2024-11-17 23:06:18.579513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.463 23:06:19 -- accel/accel.sh@18 -- # out=' 00:06:23.463 SPDK Configuration: 00:06:23.463 Core mask: 0x1 00:06:23.463 00:06:23.463 Accel Perf Configuration: 00:06:23.463 Workload Type: copy_crc32c 00:06:23.463 CRC-32C seed: 0 00:06:23.463 Vector size: 4096 bytes 00:06:23.463 Transfer size: 4096 bytes 00:06:23.463 Vector count 1 00:06:23.463 Module: software 00:06:23.463 Queue depth: 32 00:06:23.463 Allocate depth: 32 00:06:23.463 # threads/core: 1 00:06:23.463 Run time: 1 seconds 00:06:23.463 Verify: Yes 00:06:23.463 00:06:23.463 Running for 1 seconds... 00:06:23.463 00:06:23.463 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:23.463 ------------------------------------------------------------------------------------ 00:06:23.464 0,0 435648/s 1701 MiB/s 0 0 00:06:23.464 ==================================================================================== 00:06:23.464 Total 435648/s 1701 MiB/s 0 0' 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:23.464 23:06:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:23.464 23:06:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:23.464 23:06:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.464 23:06:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.464 23:06:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.464 23:06:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.464 23:06:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.464 23:06:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.464 23:06:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.464 23:06:19 -- accel/accel.sh@42 -- # jq -r . 
00:06:23.464 [2024-11-17 23:06:19.770294] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:23.464 [2024-11-17 23:06:19.770380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1287725 ] 00:06:23.464 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.464 [2024-11-17 23:06:19.839657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.464 [2024-11-17 23:06:19.906495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.464 23:06:19 -- accel/accel.sh@21 -- # val= 00:06:23.464 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:23.464 23:06:19 -- accel/accel.sh@21 -- # val= 00:06:23.464 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:23.464 23:06:19 -- accel/accel.sh@21 -- # val=0x1 00:06:23.464 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:23.464 23:06:19 -- accel/accel.sh@21 -- # val= 00:06:23.464 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:23.464 23:06:19 -- accel/accel.sh@21 -- # val= 00:06:23.464 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:23.464 23:06:19 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:23.464 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.464 23:06:19 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:23.464 23:06:19 -- accel/accel.sh@21 -- # val=0 00:06:23.464 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:23.464 23:06:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.464 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:23.464 23:06:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.464 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:23.464 23:06:19 -- accel/accel.sh@21 -- # val= 00:06:23.464 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:23.464 23:06:19 -- accel/accel.sh@21 -- # val=software 00:06:23.464 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.464 23:06:19 -- accel/accel.sh@23 -- # accel_module=software 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:23.464 23:06:19 -- accel/accel.sh@21 -- # val=32 00:06:23.464 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 
00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:23.464 23:06:19 -- accel/accel.sh@21 -- # val=32 00:06:23.464 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:23.464 23:06:19 -- accel/accel.sh@21 -- # val=1 00:06:23.464 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:23.464 23:06:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:23.464 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:23.464 23:06:19 -- accel/accel.sh@21 -- # val=Yes 00:06:23.464 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:23.464 23:06:19 -- accel/accel.sh@21 -- # val= 00:06:23.464 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:23.464 23:06:19 -- accel/accel.sh@21 -- # val= 00:06:23.464 23:06:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # IFS=: 00:06:23.464 23:06:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.842 23:06:21 -- accel/accel.sh@21 -- # val= 00:06:24.842 23:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.842 23:06:21 -- accel/accel.sh@20 -- # IFS=: 00:06:24.842 23:06:21 -- accel/accel.sh@20 -- # read -r var val 00:06:24.842 23:06:21 -- accel/accel.sh@21 -- # val= 00:06:24.842 23:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.843 23:06:21 -- accel/accel.sh@20 -- # IFS=: 00:06:24.843 23:06:21 -- accel/accel.sh@20 -- # read -r var val 00:06:24.843 23:06:21 -- accel/accel.sh@21 -- # val= 00:06:24.843 23:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.843 23:06:21 -- accel/accel.sh@20 -- # IFS=: 00:06:24.843 23:06:21 -- accel/accel.sh@20 -- # read -r var val 00:06:24.843 23:06:21 -- accel/accel.sh@21 -- # val= 00:06:24.843 23:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.843 23:06:21 -- accel/accel.sh@20 -- # IFS=: 00:06:24.843 23:06:21 -- accel/accel.sh@20 -- # read -r var val 00:06:24.843 23:06:21 -- accel/accel.sh@21 -- # val= 00:06:24.843 23:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.843 23:06:21 -- accel/accel.sh@20 -- # IFS=: 00:06:24.843 23:06:21 -- accel/accel.sh@20 -- # read -r var val 00:06:24.843 23:06:21 -- accel/accel.sh@21 -- # val= 00:06:24.843 23:06:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.843 23:06:21 -- accel/accel.sh@20 -- # IFS=: 00:06:24.843 23:06:21 -- accel/accel.sh@20 -- # read -r var val 00:06:24.843 23:06:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:24.843 23:06:21 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:24.843 23:06:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.843 00:06:24.843 real 0m2.661s 00:06:24.843 user 0m2.411s 00:06:24.843 sys 0m0.258s 00:06:24.843 23:06:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.843 23:06:21 -- common/autotest_common.sh@10 -- # set +x 00:06:24.843 ************************************ 00:06:24.843 END TEST accel_copy_crc32c 00:06:24.843 ************************************ 00:06:24.843 
23:06:21 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:24.843 23:06:21 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:24.843 23:06:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.843 23:06:21 -- common/autotest_common.sh@10 -- # set +x 00:06:24.843 ************************************ 00:06:24.843 START TEST accel_copy_crc32c_C2 00:06:24.843 ************************************ 00:06:24.843 23:06:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:24.843 23:06:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.843 23:06:21 -- accel/accel.sh@17 -- # local accel_module 00:06:24.843 23:06:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:24.843 23:06:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:24.843 23:06:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.843 23:06:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.843 23:06:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.843 23:06:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.843 23:06:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.843 23:06:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.843 23:06:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.843 23:06:21 -- accel/accel.sh@42 -- # jq -r . 00:06:24.843 [2024-11-17 23:06:21.143751] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:24.843 [2024-11-17 23:06:21.143838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288006 ] 00:06:24.843 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.843 [2024-11-17 23:06:21.213309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.843 [2024-11-17 23:06:21.280099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.222 23:06:22 -- accel/accel.sh@18 -- # out=' 00:06:26.222 SPDK Configuration: 00:06:26.222 Core mask: 0x1 00:06:26.222 00:06:26.222 Accel Perf Configuration: 00:06:26.222 Workload Type: copy_crc32c 00:06:26.222 CRC-32C seed: 0 00:06:26.222 Vector size: 4096 bytes 00:06:26.222 Transfer size: 8192 bytes 00:06:26.222 Vector count 2 00:06:26.222 Module: software 00:06:26.222 Queue depth: 32 00:06:26.222 Allocate depth: 32 00:06:26.222 # threads/core: 1 00:06:26.222 Run time: 1 seconds 00:06:26.222 Verify: Yes 00:06:26.222 00:06:26.222 Running for 1 seconds... 
00:06:26.222 00:06:26.222 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:26.222 ------------------------------------------------------------------------------------ 00:06:26.222 0,0 299616/s 2340 MiB/s 0 0 00:06:26.222 ==================================================================================== 00:06:26.222 Total 299616/s 1170 MiB/s 0 0' 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.222 23:06:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:26.222 23:06:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:26.222 23:06:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.222 23:06:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.222 23:06:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.222 23:06:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.222 23:06:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.222 23:06:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.222 23:06:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.222 23:06:22 -- accel/accel.sh@42 -- # jq -r . 00:06:26.222 [2024-11-17 23:06:22.470250] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:26.222 [2024-11-17 23:06:22.470336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288272 ] 00:06:26.222 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.222 [2024-11-17 23:06:22.538956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.222 [2024-11-17 23:06:22.605302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.222 23:06:22 -- accel/accel.sh@21 -- # val= 00:06:26.222 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.222 23:06:22 -- accel/accel.sh@21 -- # val= 00:06:26.222 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.222 23:06:22 -- accel/accel.sh@21 -- # val=0x1 00:06:26.222 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.222 23:06:22 -- accel/accel.sh@21 -- # val= 00:06:26.222 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.222 23:06:22 -- accel/accel.sh@21 -- # val= 00:06:26.222 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.222 23:06:22 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:26.222 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.222 23:06:22 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.222 23:06:22 -- accel/accel.sh@21 -- # val=0 00:06:26.222 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # 
IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.222 23:06:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.222 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.222 23:06:22 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:26.222 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.222 23:06:22 -- accel/accel.sh@21 -- # val= 00:06:26.222 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.222 23:06:22 -- accel/accel.sh@21 -- # val=software 00:06:26.222 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.222 23:06:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.222 23:06:22 -- accel/accel.sh@21 -- # val=32 00:06:26.222 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.222 23:06:22 -- accel/accel.sh@21 -- # val=32 00:06:26.222 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.222 23:06:22 -- accel/accel.sh@21 -- # val=1 00:06:26.222 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.222 23:06:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:26.222 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.222 23:06:22 -- accel/accel.sh@21 -- # val=Yes 00:06:26.222 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.222 23:06:22 -- accel/accel.sh@21 -- # val= 00:06:26.222 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:26.222 23:06:22 -- accel/accel.sh@21 -- # val= 00:06:26.222 23:06:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # IFS=: 00:06:26.222 23:06:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.600 23:06:23 -- accel/accel.sh@21 -- # val= 00:06:27.600 23:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.600 23:06:23 -- accel/accel.sh@20 -- # IFS=: 00:06:27.600 23:06:23 -- accel/accel.sh@20 -- # read -r var val 00:06:27.600 23:06:23 -- accel/accel.sh@21 -- # val= 00:06:27.600 23:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.600 23:06:23 -- accel/accel.sh@20 -- # IFS=: 00:06:27.600 23:06:23 -- accel/accel.sh@20 -- # read -r var val 00:06:27.600 23:06:23 -- accel/accel.sh@21 -- # val= 00:06:27.600 23:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.600 23:06:23 -- accel/accel.sh@20 -- # IFS=: 00:06:27.600 23:06:23 -- accel/accel.sh@20 -- # read -r var val 00:06:27.600 23:06:23 -- accel/accel.sh@21 -- # val= 00:06:27.600 23:06:23 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:27.600 23:06:23 -- accel/accel.sh@20 -- # IFS=: 00:06:27.600 23:06:23 -- accel/accel.sh@20 -- # read -r var val 00:06:27.600 23:06:23 -- accel/accel.sh@21 -- # val= 00:06:27.600 23:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.600 23:06:23 -- accel/accel.sh@20 -- # IFS=: 00:06:27.600 23:06:23 -- accel/accel.sh@20 -- # read -r var val 00:06:27.600 23:06:23 -- accel/accel.sh@21 -- # val= 00:06:27.600 23:06:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.600 23:06:23 -- accel/accel.sh@20 -- # IFS=: 00:06:27.600 23:06:23 -- accel/accel.sh@20 -- # read -r var val 00:06:27.600 23:06:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:27.600 23:06:23 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:27.600 23:06:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.600 00:06:27.600 real 0m2.660s 00:06:27.600 user 0m2.402s 00:06:27.600 sys 0m0.266s 00:06:27.600 23:06:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:27.600 23:06:23 -- common/autotest_common.sh@10 -- # set +x 00:06:27.600 ************************************ 00:06:27.600 END TEST accel_copy_crc32c_C2 00:06:27.600 ************************************ 00:06:27.600 23:06:23 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:27.600 23:06:23 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:27.600 23:06:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.600 23:06:23 -- common/autotest_common.sh@10 -- # set +x 00:06:27.600 ************************************ 00:06:27.600 START TEST accel_dualcast 00:06:27.600 ************************************ 00:06:27.600 23:06:23 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:27.600 23:06:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.600 23:06:23 -- accel/accel.sh@17 -- # local accel_module 00:06:27.600 23:06:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:27.600 23:06:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:27.600 23:06:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.600 23:06:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.600 23:06:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.600 23:06:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.600 23:06:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.600 23:06:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.600 23:06:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.600 23:06:23 -- accel/accel.sh@42 -- # jq -r . 00:06:27.600 [2024-11-17 23:06:23.842817] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
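
A note on the copy_crc32c -C 2 numbers just above: with a vector count of 2, the same copy-plus-CRC runs over two 4096-byte buffers per 8192-byte transfer. The per-core row scales by the full transfer (299616/s x 8192 B is about 2340 MiB/s) while the Total line appears to use the 4096-byte vector size (299616/s x 4096 B is about 1170 MiB/s); the transfer counts themselves agree, only the bandwidth accounting differs. A sketch of the vectored form, chaining the CRC across segments by reseeding with the running value; copy_crc32c here is the hypothetical helper from the sketch further up, not an SPDK API:

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/uio.h>

    /* from the earlier copy_crc32c sketch (illustrative, not SPDK's API) */
    uint32_t copy_crc32c(void *dst, const void *src, size_t len, uint32_t seed);

    /* -C 2: one logical transfer arrives as two iovec segments; the CRC
     * chains across segments because each call reseeds with the prior
     * result, so the final value equals a CRC over the concatenation. */
    uint32_t
    copy_crc32cv(void *dst, const struct iovec *iov, int iovcnt, uint32_t seed)
    {
        uint8_t *out = dst;
        uint32_t crc = seed;

        for (int i = 0; i < iovcnt; i++) {
            crc = copy_crc32c(out, iov[i].iov_base, iov[i].iov_len, crc);
            out += iov[i].iov_len;
        }
        return crc;
    }
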
00:06:27.600 [2024-11-17 23:06:23.842902] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288509 ] 00:06:27.600 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.600 [2024-11-17 23:06:23.913458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.600 [2024-11-17 23:06:23.980902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.979 23:06:25 -- accel/accel.sh@18 -- # out=' 00:06:28.979 SPDK Configuration: 00:06:28.979 Core mask: 0x1 00:06:28.979 00:06:28.979 Accel Perf Configuration: 00:06:28.979 Workload Type: dualcast 00:06:28.979 Transfer size: 4096 bytes 00:06:28.979 Vector count 1 00:06:28.979 Module: software 00:06:28.979 Queue depth: 32 00:06:28.979 Allocate depth: 32 00:06:28.979 # threads/core: 1 00:06:28.979 Run time: 1 seconds 00:06:28.979 Verify: Yes 00:06:28.979 00:06:28.979 Running for 1 seconds... 00:06:28.979 00:06:28.979 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:28.979 ------------------------------------------------------------------------------------ 00:06:28.979 0,0 632480/s 2470 MiB/s 0 0 00:06:28.979 ==================================================================================== 00:06:28.979 Total 632480/s 2470 MiB/s 0 0' 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:28.979 23:06:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:28.979 23:06:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:28.979 23:06:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.979 23:06:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.979 23:06:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.979 23:06:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.979 23:06:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.979 23:06:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.979 23:06:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.979 23:06:25 -- accel/accel.sh@42 -- # jq -r . 00:06:28.979 [2024-11-17 23:06:25.170426] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
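
For context on the dualcast run above: dualcast writes one source buffer to two destinations in a single operation, and the rate again tracks the source stream (632480 transfers/s x 4096 B is about 2470 MiB/s). In software the operation is, in effect, two copies; offload engines such as Intel DSA expose the same thing as a single descriptor. A minimal sketch, illustrative rather than SPDK's module:

    #include <stddef.h>
    #include <string.h>

    /* dualcast, software reference: replicate one source buffer into two
     * destination buffers of the same length. */
    void
    dualcast(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }
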
00:06:28.979 [2024-11-17 23:06:25.170511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288676 ] 00:06:28.979 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.979 [2024-11-17 23:06:25.239536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.979 [2024-11-17 23:06:25.306354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.979 23:06:25 -- accel/accel.sh@21 -- # val= 00:06:28.979 23:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:28.979 23:06:25 -- accel/accel.sh@21 -- # val= 00:06:28.979 23:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:28.979 23:06:25 -- accel/accel.sh@21 -- # val=0x1 00:06:28.979 23:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:28.979 23:06:25 -- accel/accel.sh@21 -- # val= 00:06:28.979 23:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:28.979 23:06:25 -- accel/accel.sh@21 -- # val= 00:06:28.979 23:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:28.979 23:06:25 -- accel/accel.sh@21 -- # val=dualcast 00:06:28.979 23:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.979 23:06:25 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:28.979 23:06:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.979 23:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:28.979 23:06:25 -- accel/accel.sh@21 -- # val= 00:06:28.979 23:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:28.979 23:06:25 -- accel/accel.sh@21 -- # val=software 00:06:28.979 23:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.979 23:06:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:28.979 23:06:25 -- accel/accel.sh@21 -- # val=32 00:06:28.979 23:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:28.979 23:06:25 -- accel/accel.sh@21 -- # val=32 00:06:28.979 23:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:28.979 23:06:25 -- accel/accel.sh@21 -- # val=1 00:06:28.979 23:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:28.979 23:06:25 
-- accel/accel.sh@21 -- # val='1 seconds' 00:06:28.979 23:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:28.979 23:06:25 -- accel/accel.sh@21 -- # val=Yes 00:06:28.979 23:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:28.979 23:06:25 -- accel/accel.sh@21 -- # val= 00:06:28.979 23:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:28.979 23:06:25 -- accel/accel.sh@21 -- # val= 00:06:28.979 23:06:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # IFS=: 00:06:28.979 23:06:25 -- accel/accel.sh@20 -- # read -r var val 00:06:29.915 23:06:26 -- accel/accel.sh@21 -- # val= 00:06:29.915 23:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.915 23:06:26 -- accel/accel.sh@20 -- # IFS=: 00:06:29.915 23:06:26 -- accel/accel.sh@20 -- # read -r var val 00:06:29.915 23:06:26 -- accel/accel.sh@21 -- # val= 00:06:29.915 23:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.915 23:06:26 -- accel/accel.sh@20 -- # IFS=: 00:06:29.915 23:06:26 -- accel/accel.sh@20 -- # read -r var val 00:06:29.915 23:06:26 -- accel/accel.sh@21 -- # val= 00:06:29.915 23:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.915 23:06:26 -- accel/accel.sh@20 -- # IFS=: 00:06:29.915 23:06:26 -- accel/accel.sh@20 -- # read -r var val 00:06:29.915 23:06:26 -- accel/accel.sh@21 -- # val= 00:06:29.915 23:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.915 23:06:26 -- accel/accel.sh@20 -- # IFS=: 00:06:29.915 23:06:26 -- accel/accel.sh@20 -- # read -r var val 00:06:29.915 23:06:26 -- accel/accel.sh@21 -- # val= 00:06:29.915 23:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.915 23:06:26 -- accel/accel.sh@20 -- # IFS=: 00:06:29.915 23:06:26 -- accel/accel.sh@20 -- # read -r var val 00:06:29.915 23:06:26 -- accel/accel.sh@21 -- # val= 00:06:29.915 23:06:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.915 23:06:26 -- accel/accel.sh@20 -- # IFS=: 00:06:29.915 23:06:26 -- accel/accel.sh@20 -- # read -r var val 00:06:29.915 23:06:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.915 23:06:26 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:29.915 23:06:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.915 00:06:29.915 real 0m2.662s 00:06:29.915 user 0m2.409s 00:06:29.915 sys 0m0.261s 00:06:29.915 23:06:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.915 23:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:29.915 ************************************ 00:06:29.916 END TEST accel_dualcast 00:06:29.916 ************************************ 00:06:29.916 23:06:26 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:29.916 23:06:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:29.916 23:06:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.916 23:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:29.916 ************************************ 00:06:29.916 START TEST accel_compare 00:06:29.916 ************************************ 00:06:29.916 23:06:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:29.916 23:06:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.916 23:06:26 
-- accel/accel.sh@17 -- # local accel_module 00:06:30.176 23:06:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:30.176 23:06:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:30.176 23:06:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.176 23:06:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.176 23:06:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.176 23:06:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.176 23:06:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.176 23:06:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.176 23:06:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.176 23:06:26 -- accel/accel.sh@42 -- # jq -r . 00:06:30.176 [2024-11-17 23:06:26.548913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:30.176 [2024-11-17 23:06:26.548991] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288884 ] 00:06:30.176 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.176 [2024-11-17 23:06:26.617233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.176 [2024-11-17 23:06:26.684639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.555 23:06:27 -- accel/accel.sh@18 -- # out=' 00:06:31.555 SPDK Configuration: 00:06:31.555 Core mask: 0x1 00:06:31.555 00:06:31.555 Accel Perf Configuration: 00:06:31.555 Workload Type: compare 00:06:31.555 Transfer size: 4096 bytes 00:06:31.555 Vector count 1 00:06:31.555 Module: software 00:06:31.555 Queue depth: 32 00:06:31.555 Allocate depth: 32 00:06:31.555 # threads/core: 1 00:06:31.555 Run time: 1 seconds 00:06:31.555 Verify: Yes 00:06:31.555 00:06:31.555 Running for 1 seconds... 00:06:31.555 00:06:31.555 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.555 ------------------------------------------------------------------------------------ 00:06:31.555 0,0 810656/s 3166 MiB/s 0 0 00:06:31.555 ==================================================================================== 00:06:31.555 Total 810656/s 3166 MiB/s 0 0' 00:06:31.555 23:06:27 -- accel/accel.sh@20 -- # IFS=: 00:06:31.555 23:06:27 -- accel/accel.sh@20 -- # read -r var val 00:06:31.555 23:06:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:31.555 23:06:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:31.556 23:06:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.556 23:06:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.556 23:06:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.556 23:06:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.556 23:06:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.556 23:06:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.556 23:06:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.556 23:06:27 -- accel/accel.sh@42 -- # jq -r . 00:06:31.556 [2024-11-17 23:06:27.875250] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
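
On the compare results above: compare checks two equal-length buffers for equality, which is what the Failed and Miscompares columns count; it posts the highest rate in this series (810656 transfers/s x 4096 B is about 3166 MiB/s), plausibly because the workload only reads. A one-line software sketch:

    #include <stddef.h>
    #include <string.h>

    /* compare, software reference: returns 0 when the buffers match; a
     * non-zero result is what the Miscompares column above would count. */
    int
    accel_compare(const void *a, const void *b, size_t len)
    {
        return memcmp(a, b, len) != 0;
    }
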
00:06:31.556 [2024-11-17 23:06:27.875337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1289135 ] 00:06:31.556 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.556 [2024-11-17 23:06:27.945265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.556 [2024-11-17 23:06:28.011101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.556 23:06:28 -- accel/accel.sh@21 -- # val= 00:06:31.556 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:06:31.556 23:06:28 -- accel/accel.sh@21 -- # val= 00:06:31.556 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:06:31.556 23:06:28 -- accel/accel.sh@21 -- # val=0x1 00:06:31.556 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:06:31.556 23:06:28 -- accel/accel.sh@21 -- # val= 00:06:31.556 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:06:31.556 23:06:28 -- accel/accel.sh@21 -- # val= 00:06:31.556 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:06:31.556 23:06:28 -- accel/accel.sh@21 -- # val=compare 00:06:31.556 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.556 23:06:28 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:06:31.556 23:06:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.556 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:06:31.556 23:06:28 -- accel/accel.sh@21 -- # val= 00:06:31.556 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:06:31.556 23:06:28 -- accel/accel.sh@21 -- # val=software 00:06:31.556 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.556 23:06:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:06:31.556 23:06:28 -- accel/accel.sh@21 -- # val=32 00:06:31.556 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:06:31.556 23:06:28 -- accel/accel.sh@21 -- # val=32 00:06:31.556 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:06:31.556 23:06:28 -- accel/accel.sh@21 -- # val=1 00:06:31.556 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:06:31.556 23:06:28 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:31.556 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:06:31.556 23:06:28 -- accel/accel.sh@21 -- # val=Yes 00:06:31.556 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:06:31.556 23:06:28 -- accel/accel.sh@21 -- # val= 00:06:31.556 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:06:31.556 23:06:28 -- accel/accel.sh@21 -- # val= 00:06:31.556 23:06:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # IFS=: 00:06:31.556 23:06:28 -- accel/accel.sh@20 -- # read -r var val 00:06:32.935 23:06:29 -- accel/accel.sh@21 -- # val= 00:06:32.935 23:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.935 23:06:29 -- accel/accel.sh@20 -- # IFS=: 00:06:32.935 23:06:29 -- accel/accel.sh@20 -- # read -r var val 00:06:32.935 23:06:29 -- accel/accel.sh@21 -- # val= 00:06:32.935 23:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.935 23:06:29 -- accel/accel.sh@20 -- # IFS=: 00:06:32.935 23:06:29 -- accel/accel.sh@20 -- # read -r var val 00:06:32.935 23:06:29 -- accel/accel.sh@21 -- # val= 00:06:32.935 23:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.935 23:06:29 -- accel/accel.sh@20 -- # IFS=: 00:06:32.935 23:06:29 -- accel/accel.sh@20 -- # read -r var val 00:06:32.935 23:06:29 -- accel/accel.sh@21 -- # val= 00:06:32.935 23:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.935 23:06:29 -- accel/accel.sh@20 -- # IFS=: 00:06:32.935 23:06:29 -- accel/accel.sh@20 -- # read -r var val 00:06:32.935 23:06:29 -- accel/accel.sh@21 -- # val= 00:06:32.935 23:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.935 23:06:29 -- accel/accel.sh@20 -- # IFS=: 00:06:32.935 23:06:29 -- accel/accel.sh@20 -- # read -r var val 00:06:32.935 23:06:29 -- accel/accel.sh@21 -- # val= 00:06:32.935 23:06:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.935 23:06:29 -- accel/accel.sh@20 -- # IFS=: 00:06:32.935 23:06:29 -- accel/accel.sh@20 -- # read -r var val 00:06:32.935 23:06:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.935 23:06:29 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:32.935 23:06:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.935 00:06:32.935 real 0m2.659s 00:06:32.935 user 0m2.413s 00:06:32.935 sys 0m0.254s 00:06:32.935 23:06:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:32.935 23:06:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.935 ************************************ 00:06:32.935 END TEST accel_compare 00:06:32.935 ************************************ 00:06:32.935 23:06:29 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:32.935 23:06:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:32.935 23:06:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.935 23:06:29 -- common/autotest_common.sh@10 -- # set +x 00:06:32.935 ************************************ 00:06:32.935 START TEST accel_xor 00:06:32.935 ************************************ 00:06:32.935 23:06:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:32.935 23:06:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.935 23:06:29 -- accel/accel.sh@17 
-- # local accel_module 00:06:32.935 23:06:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:32.935 23:06:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:32.935 23:06:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.935 23:06:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.935 23:06:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.935 23:06:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.935 23:06:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.935 23:06:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.935 23:06:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.935 23:06:29 -- accel/accel.sh@42 -- # jq -r . 00:06:32.935 [2024-11-17 23:06:29.235244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:32.935 [2024-11-17 23:06:29.235301] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1289423 ] 00:06:32.935 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.935 [2024-11-17 23:06:29.295053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.935 [2024-11-17 23:06:29.363066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.314 23:06:30 -- accel/accel.sh@18 -- # out=' 00:06:34.314 SPDK Configuration: 00:06:34.314 Core mask: 0x1 00:06:34.314 00:06:34.314 Accel Perf Configuration: 00:06:34.314 Workload Type: xor 00:06:34.314 Source buffers: 2 00:06:34.314 Transfer size: 4096 bytes 00:06:34.314 Vector count 1 00:06:34.314 Module: software 00:06:34.314 Queue depth: 32 00:06:34.314 Allocate depth: 32 00:06:34.314 # threads/core: 1 00:06:34.314 Run time: 1 seconds 00:06:34.314 Verify: Yes 00:06:34.314 00:06:34.314 Running for 1 seconds... 00:06:34.314 00:06:34.314 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.314 ------------------------------------------------------------------------------------ 00:06:34.314 0,0 693408/s 2708 MiB/s 0 0 00:06:34.314 ==================================================================================== 00:06:34.314 Total 693408/s 2708 MiB/s 0 0' 00:06:34.314 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:06:34.314 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:06:34.314 23:06:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:34.314 23:06:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:34.315 23:06:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.315 23:06:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.315 23:06:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.315 23:06:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.315 23:06:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.315 23:06:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.315 23:06:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.315 23:06:30 -- accel/accel.sh@42 -- # jq -r . 00:06:34.315 [2024-11-17 23:06:30.555990] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
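
For reference on the xor run above: the xor workload combines N source buffers byte-wise into one destination, two sources in this run (693408 transfers/s x 4096 B is about 2708 MiB/s). A software sketch of the general N-source form, which also covers the -x 3 rerun that follows:

    #include <stddef.h>
    #include <stdint.h>

    /* xor, software reference: dst[i] = srcs[0][i] ^ ... ^ srcs[nsrc-1][i].
     * This is the parity primitive that RAID-style layouts build on. */
    void
    accel_xor(uint8_t *dst, const uint8_t *const *srcs, int nsrc, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            uint8_t v = srcs[0][i];

            for (int j = 1; j < nsrc; j++)
                v ^= srcs[j][i];
            dst[i] = v;
        }
    }
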
00:06:34.315 [2024-11-17 23:06:30.556084] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1289689 ] 00:06:34.315 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.315 [2024-11-17 23:06:30.626439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.315 [2024-11-17 23:06:30.692385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.315 23:06:30 -- accel/accel.sh@21 -- # val= 00:06:34.315 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:06:34.315 23:06:30 -- accel/accel.sh@21 -- # val= 00:06:34.315 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:06:34.315 23:06:30 -- accel/accel.sh@21 -- # val=0x1 00:06:34.315 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:06:34.315 23:06:30 -- accel/accel.sh@21 -- # val= 00:06:34.315 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:06:34.315 23:06:30 -- accel/accel.sh@21 -- # val= 00:06:34.315 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:06:34.315 23:06:30 -- accel/accel.sh@21 -- # val=xor 00:06:34.315 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.315 23:06:30 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:06:34.315 23:06:30 -- accel/accel.sh@21 -- # val=2 00:06:34.315 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:06:34.315 23:06:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.315 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:06:34.315 23:06:30 -- accel/accel.sh@21 -- # val= 00:06:34.315 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:06:34.315 23:06:30 -- accel/accel.sh@21 -- # val=software 00:06:34.315 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.315 23:06:30 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:06:34.315 23:06:30 -- accel/accel.sh@21 -- # val=32 00:06:34.315 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:06:34.315 23:06:30 -- accel/accel.sh@21 -- # val=32 00:06:34.315 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:06:34.315 23:06:30 -- 
accel/accel.sh@21 -- # val=1 00:06:34.315 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:06:34.315 23:06:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.315 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:06:34.315 23:06:30 -- accel/accel.sh@21 -- # val=Yes 00:06:34.315 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:06:34.315 23:06:30 -- accel/accel.sh@21 -- # val= 00:06:34.315 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:06:34.315 23:06:30 -- accel/accel.sh@21 -- # val= 00:06:34.315 23:06:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # IFS=: 00:06:34.315 23:06:30 -- accel/accel.sh@20 -- # read -r var val 00:06:35.253 23:06:31 -- accel/accel.sh@21 -- # val= 00:06:35.253 23:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.253 23:06:31 -- accel/accel.sh@20 -- # IFS=: 00:06:35.253 23:06:31 -- accel/accel.sh@20 -- # read -r var val 00:06:35.253 23:06:31 -- accel/accel.sh@21 -- # val= 00:06:35.253 23:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.253 23:06:31 -- accel/accel.sh@20 -- # IFS=: 00:06:35.253 23:06:31 -- accel/accel.sh@20 -- # read -r var val 00:06:35.253 23:06:31 -- accel/accel.sh@21 -- # val= 00:06:35.253 23:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.253 23:06:31 -- accel/accel.sh@20 -- # IFS=: 00:06:35.253 23:06:31 -- accel/accel.sh@20 -- # read -r var val 00:06:35.253 23:06:31 -- accel/accel.sh@21 -- # val= 00:06:35.253 23:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.253 23:06:31 -- accel/accel.sh@20 -- # IFS=: 00:06:35.253 23:06:31 -- accel/accel.sh@20 -- # read -r var val 00:06:35.253 23:06:31 -- accel/accel.sh@21 -- # val= 00:06:35.253 23:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.253 23:06:31 -- accel/accel.sh@20 -- # IFS=: 00:06:35.253 23:06:31 -- accel/accel.sh@20 -- # read -r var val 00:06:35.512 23:06:31 -- accel/accel.sh@21 -- # val= 00:06:35.512 23:06:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.512 23:06:31 -- accel/accel.sh@20 -- # IFS=: 00:06:35.512 23:06:31 -- accel/accel.sh@20 -- # read -r var val 00:06:35.512 23:06:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.512 23:06:31 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:35.512 23:06:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.512 00:06:35.512 real 0m2.644s 00:06:35.512 user 0m2.405s 00:06:35.512 sys 0m0.248s 00:06:35.512 23:06:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:35.512 23:06:31 -- common/autotest_common.sh@10 -- # set +x 00:06:35.512 ************************************ 00:06:35.512 END TEST accel_xor 00:06:35.512 ************************************ 00:06:35.512 23:06:31 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:35.512 23:06:31 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:35.512 23:06:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.512 23:06:31 -- common/autotest_common.sh@10 -- # set +x 00:06:35.512 ************************************ 00:06:35.512 START TEST accel_xor 
00:06:35.512 ************************************ 00:06:35.512 23:06:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:35.512 23:06:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.512 23:06:31 -- accel/accel.sh@17 -- # local accel_module 00:06:35.512 23:06:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:35.512 23:06:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:35.512 23:06:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.512 23:06:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.512 23:06:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.512 23:06:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.512 23:06:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.512 23:06:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.512 23:06:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.512 23:06:31 -- accel/accel.sh@42 -- # jq -r . 00:06:35.512 [2024-11-17 23:06:31.915075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:35.512 [2024-11-17 23:06:31.915124] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1289973 ] 00:06:35.512 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.512 [2024-11-17 23:06:31.978759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.512 [2024-11-17 23:06:32.046774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.890 23:06:33 -- accel/accel.sh@18 -- # out=' 00:06:36.890 SPDK Configuration: 00:06:36.890 Core mask: 0x1 00:06:36.890 00:06:36.890 Accel Perf Configuration: 00:06:36.890 Workload Type: xor 00:06:36.890 Source buffers: 3 00:06:36.890 Transfer size: 4096 bytes 00:06:36.890 Vector count 1 00:06:36.890 Module: software 00:06:36.890 Queue depth: 32 00:06:36.890 Allocate depth: 32 00:06:36.890 # threads/core: 1 00:06:36.890 Run time: 1 seconds 00:06:36.890 Verify: Yes 00:06:36.890 00:06:36.890 Running for 1 seconds... 00:06:36.891 00:06:36.891 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:36.891 ------------------------------------------------------------------------------------ 00:06:36.891 0,0 675552/s 2638 MiB/s 0 0 00:06:36.891 ==================================================================================== 00:06:36.891 Total 675552/s 2638 MiB/s 0 0' 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:36.891 23:06:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:36.891 23:06:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:36.891 23:06:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.891 23:06:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.891 23:06:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.891 23:06:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.891 23:06:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.891 23:06:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.891 23:06:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.891 23:06:33 -- accel/accel.sh@42 -- # jq -r . 00:06:36.891 [2024-11-17 23:06:33.237079] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
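
On the -x 3 variant above: it reruns the xor workload with three source buffers, and throughput dips only slightly (675552/s against 693408/s with two sources, about 2638 MiB/s), consistent with one extra source read per output byte. Usage of the hypothetical accel_xor sketch from the previous note, with three sources:

    #include <stddef.h>
    #include <stdint.h>

    /* accel_xor is the hypothetical sketch from the earlier xor note */
    void accel_xor(uint8_t *dst, const uint8_t *const *srcs, int nsrc, size_t len);

    int
    main(void)
    {
        static uint8_t a[4096], b[4096], c[4096], parity[4096];
        const uint8_t *const srcs[] = { a, b, c };  /* -x 3: three sources */

        accel_xor(parity, srcs, 3, sizeof parity);
        return 0;
    }
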
00:06:36.891 [2024-11-17 23:06:33.237165] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290252 ] 00:06:36.891 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.891 [2024-11-17 23:06:33.307012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.891 [2024-11-17 23:06:33.372654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.891 23:06:33 -- accel/accel.sh@21 -- # val= 00:06:36.891 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:36.891 23:06:33 -- accel/accel.sh@21 -- # val= 00:06:36.891 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:36.891 23:06:33 -- accel/accel.sh@21 -- # val=0x1 00:06:36.891 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:36.891 23:06:33 -- accel/accel.sh@21 -- # val= 00:06:36.891 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:36.891 23:06:33 -- accel/accel.sh@21 -- # val= 00:06:36.891 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:36.891 23:06:33 -- accel/accel.sh@21 -- # val=xor 00:06:36.891 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.891 23:06:33 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:36.891 23:06:33 -- accel/accel.sh@21 -- # val=3 00:06:36.891 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:36.891 23:06:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:36.891 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:36.891 23:06:33 -- accel/accel.sh@21 -- # val= 00:06:36.891 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:36.891 23:06:33 -- accel/accel.sh@21 -- # val=software 00:06:36.891 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.891 23:06:33 -- accel/accel.sh@23 -- # accel_module=software 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:36.891 23:06:33 -- accel/accel.sh@21 -- # val=32 00:06:36.891 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:36.891 23:06:33 -- accel/accel.sh@21 -- # val=32 00:06:36.891 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:36.891 23:06:33 -- 
accel/accel.sh@21 -- # val=1 00:06:36.891 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:36.891 23:06:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:36.891 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:36.891 23:06:33 -- accel/accel.sh@21 -- # val=Yes 00:06:36.891 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:36.891 23:06:33 -- accel/accel.sh@21 -- # val= 00:06:36.891 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:36.891 23:06:33 -- accel/accel.sh@21 -- # val= 00:06:36.891 23:06:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # IFS=: 00:06:36.891 23:06:33 -- accel/accel.sh@20 -- # read -r var val 00:06:38.310 23:06:34 -- accel/accel.sh@21 -- # val= 00:06:38.310 23:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.310 23:06:34 -- accel/accel.sh@20 -- # IFS=: 00:06:38.311 23:06:34 -- accel/accel.sh@20 -- # read -r var val 00:06:38.311 23:06:34 -- accel/accel.sh@21 -- # val= 00:06:38.311 23:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.311 23:06:34 -- accel/accel.sh@20 -- # IFS=: 00:06:38.311 23:06:34 -- accel/accel.sh@20 -- # read -r var val 00:06:38.311 23:06:34 -- accel/accel.sh@21 -- # val= 00:06:38.311 23:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.311 23:06:34 -- accel/accel.sh@20 -- # IFS=: 00:06:38.311 23:06:34 -- accel/accel.sh@20 -- # read -r var val 00:06:38.311 23:06:34 -- accel/accel.sh@21 -- # val= 00:06:38.311 23:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.311 23:06:34 -- accel/accel.sh@20 -- # IFS=: 00:06:38.311 23:06:34 -- accel/accel.sh@20 -- # read -r var val 00:06:38.311 23:06:34 -- accel/accel.sh@21 -- # val= 00:06:38.311 23:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.311 23:06:34 -- accel/accel.sh@20 -- # IFS=: 00:06:38.311 23:06:34 -- accel/accel.sh@20 -- # read -r var val 00:06:38.311 23:06:34 -- accel/accel.sh@21 -- # val= 00:06:38.311 23:06:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.311 23:06:34 -- accel/accel.sh@20 -- # IFS=: 00:06:38.311 23:06:34 -- accel/accel.sh@20 -- # read -r var val 00:06:38.311 23:06:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.311 23:06:34 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:38.311 23:06:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.311 00:06:38.311 real 0m2.641s 00:06:38.311 user 0m2.411s 00:06:38.311 sys 0m0.238s 00:06:38.311 23:06:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.311 23:06:34 -- common/autotest_common.sh@10 -- # set +x 00:06:38.311 ************************************ 00:06:38.311 END TEST accel_xor 00:06:38.311 ************************************ 00:06:38.311 23:06:34 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:38.311 23:06:34 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:38.311 23:06:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.311 23:06:34 -- common/autotest_common.sh@10 -- # set +x 00:06:38.311 ************************************ 00:06:38.311 START TEST 
accel_dif_verify
00:06:38.311 ************************************
00:06:38.311 23:06:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify
00:06:38.311 23:06:34 -- accel/accel.sh@16 -- # local accel_opc
00:06:38.311 23:06:34 -- accel/accel.sh@17 -- # local accel_module
00:06:38.311 23:06:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify
00:06:38.311 23:06:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:06:38.311 23:06:34 -- accel/accel.sh@12 -- # build_accel_config
00:06:38.311 23:06:34 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:38.311 23:06:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:38.311 23:06:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:38.311 23:06:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:38.311 23:06:34 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:38.311 23:06:34 -- accel/accel.sh@41 -- # local IFS=,
00:06:38.311 23:06:34 -- accel/accel.sh@42 -- # jq -r .
00:06:38.311 [2024-11-17 23:06:34.600878] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:38.311 [2024-11-17 23:06:34.600939] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290488 ]
00:06:38.311 EAL: No free 2048 kB hugepages reported on node 1
00:06:38.311 [2024-11-17 23:06:34.666005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:38.311 [2024-11-17 23:06:34.733243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.689 23:06:35 -- accel/accel.sh@18 -- # out='
00:06:39.689 SPDK Configuration:
00:06:39.689 Core mask: 0x1
00:06:39.689
00:06:39.689 Accel Perf Configuration:
00:06:39.689 Workload Type: dif_verify
00:06:39.689 Vector size: 4096 bytes
00:06:39.689 Transfer size: 4096 bytes
00:06:39.689 Block size: 512 bytes
00:06:39.689 Metadata size: 8 bytes
00:06:39.689 Vector count 1
00:06:39.689 Module: software
00:06:39.689 Queue depth: 32
00:06:39.689 Allocate depth: 32
00:06:39.689 # threads/core: 1
00:06:39.689 Run time: 1 seconds
00:06:39.689 Verify: No
00:06:39.689
00:06:39.689 Running for 1 seconds...
00:06:39.689
00:06:39.689 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:39.689 ------------------------------------------------------------------------------------
00:06:39.689 0,0 245696/s 974 MiB/s 0 0
00:06:39.689 ====================================================================================
00:06:39.689 Total 245696/s 959 MiB/s 0 0'
00:06:39.689 23:06:35 -- accel/accel.sh@20 -- # IFS=:
00:06:39.689 23:06:35 -- accel/accel.sh@20 -- # read -r var val
00:06:39.689 23:06:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:06:39.689 23:06:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:06:39.689 23:06:35 -- accel/accel.sh@12 -- # build_accel_config
00:06:39.689 23:06:35 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:39.689 23:06:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:39.689 23:06:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:39.689 23:06:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:39.689 23:06:35 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:39.689 23:06:35 -- accel/accel.sh@41 -- # local IFS=,
00:06:39.689 23:06:35 -- accel/accel.sh@42 -- # jq -r .
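The dense runs of "IFS=:", "read -r var val", and "case "$var" in" records above come from bash xtrace of accel.sh (lines 20-24) parsing the accel_perf summary line by line to pick out fields such as the opcode and module. A minimal sketch of that loop follows; the "Workload Type"/"Module" patterns and the $out here-string are assumptions, since the trace confirms only the IFS=:/read/case shape and the accel_opc/accel_module assignments.

    # Hedged reconstruction of the parse loop traced at accel.sh@20-24.
    while IFS=: read -r var val; do
        case "$var" in
            *"Workload Type"*) accel_opc=${val# } ;;   # e.g. dif_verify
            *Module*) accel_module=${val# } ;;         # e.g. software
        esac
    done <<< "$out"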
00:06:39.689 [2024-11-17 23:06:35.922689] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:39.689 [2024-11-17 23:06:35.922765] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290647 ] 00:06:39.689 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.689 [2024-11-17 23:06:35.990589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.689 [2024-11-17 23:06:36.056370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val= 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val= 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val=0x1 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val= 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val= 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val=dif_verify 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val= 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val=software 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@23 -- # 
accel_module=software 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val=32 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val=32 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val=1 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val=No 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val= 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:39.689 23:06:36 -- accel/accel.sh@21 -- # val= 00:06:39.689 23:06:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # IFS=: 00:06:39.689 23:06:36 -- accel/accel.sh@20 -- # read -r var val 00:06:40.625 23:06:37 -- accel/accel.sh@21 -- # val= 00:06:40.625 23:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.625 23:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:40.625 23:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:40.625 23:06:37 -- accel/accel.sh@21 -- # val= 00:06:40.625 23:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.625 23:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:40.625 23:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:40.625 23:06:37 -- accel/accel.sh@21 -- # val= 00:06:40.625 23:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.625 23:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:40.625 23:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:40.625 23:06:37 -- accel/accel.sh@21 -- # val= 00:06:40.625 23:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.625 23:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:40.625 23:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:40.625 23:06:37 -- accel/accel.sh@21 -- # val= 00:06:40.625 23:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.625 23:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:40.625 23:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:40.625 23:06:37 -- accel/accel.sh@21 -- # val= 00:06:40.625 23:06:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.625 23:06:37 -- accel/accel.sh@20 -- # IFS=: 00:06:40.625 23:06:37 -- accel/accel.sh@20 -- # read -r var val 00:06:40.625 23:06:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:40.625 23:06:37 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:40.625 23:06:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.625 00:06:40.625 real 0m2.641s 00:06:40.625 user 0m2.405s 00:06:40.625 sys 0m0.248s 00:06:40.625 23:06:37 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:40.625 23:06:37 -- common/autotest_common.sh@10 -- # set +x 00:06:40.625 ************************************ 00:06:40.625 END TEST accel_dif_verify 00:06:40.626 ************************************ 00:06:40.885 23:06:37 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:40.885 23:06:37 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:40.885 23:06:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.885 23:06:37 -- common/autotest_common.sh@10 -- # set +x 00:06:40.885 ************************************ 00:06:40.885 START TEST accel_dif_generate 00:06:40.885 ************************************ 00:06:40.885 23:06:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:40.885 23:06:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.885 23:06:37 -- accel/accel.sh@17 -- # local accel_module 00:06:40.885 23:06:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:40.885 23:06:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:40.885 23:06:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.885 23:06:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.885 23:06:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.885 23:06:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.885 23:06:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.885 23:06:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.885 23:06:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.885 23:06:37 -- accel/accel.sh@42 -- # jq -r . 00:06:40.885 [2024-11-17 23:06:37.291781] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.885 [2024-11-17 23:06:37.291868] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290858 ] 00:06:40.885 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.885 [2024-11-17 23:06:37.362299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.885 [2024-11-17 23:06:37.430028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.260 23:06:38 -- accel/accel.sh@18 -- # out=' 00:06:42.260 SPDK Configuration: 00:06:42.260 Core mask: 0x1 00:06:42.260 00:06:42.260 Accel Perf Configuration: 00:06:42.260 Workload Type: dif_generate 00:06:42.260 Vector size: 4096 bytes 00:06:42.260 Transfer size: 4096 bytes 00:06:42.261 Block size: 512 bytes 00:06:42.261 Metadata size: 8 bytes 00:06:42.261 Vector count 1 00:06:42.261 Module: software 00:06:42.261 Queue depth: 32 00:06:42.261 Allocate depth: 32 00:06:42.261 # threads/core: 1 00:06:42.261 Run time: 1 seconds 00:06:42.261 Verify: No 00:06:42.261 00:06:42.261 Running for 1 seconds... 
00:06:42.261
00:06:42.261 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:42.261 ------------------------------------------------------------------------------------
00:06:42.261 0,0 290336/s 1151 MiB/s 0 0
00:06:42.261 ====================================================================================
00:06:42.261 Total 290336/s 1134 MiB/s 0 0'
00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=:
00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val
00:06:42.261 23:06:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:06:42.261 23:06:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:06:42.261 23:06:38 -- accel/accel.sh@12 -- # build_accel_config
00:06:42.261 23:06:38 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:42.261 23:06:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:42.261 23:06:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:42.261 23:06:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:42.261 23:06:38 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:42.261 23:06:38 -- accel/accel.sh@41 -- # local IFS=,
00:06:42.261 23:06:38 -- accel/accel.sh@42 -- # jq -r .
00:06:42.261 [2024-11-17 23:06:38.620772] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:42.261 [2024-11-17 23:06:38.620860] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291114 ]
00:06:42.261 EAL: No free 2048 kB hugepages reported on node 1
00:06:42.261 [2024-11-17 23:06:38.690593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:42.261 [2024-11-17 23:06:38.756272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val=
00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in
00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=:
00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val
00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val=
00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in
00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=:
00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val
00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val=0x1
00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in
00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=:
00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val
00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val=
00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in
00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=:
00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val
00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val=
00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in
00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=:
00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val
00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val=dif_generate
00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in
00:06:42.261 23:06:38 -- accel/accel.sh@24 -- # accel_opc=dif_generate
00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=:
00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val
00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val='4096 bytes'
00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in
00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=:
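Both dif_verify and dif_generate above run with the same layout: 4096-byte transfers carved into 512-byte blocks, each block carrying an 8-byte DIF tuple (the "Block size" and "Metadata size" lines in the configuration dumps). The per-transfer numbers fall out directly:

    # Worked numbers from the configuration dumps above: a 4096-byte
    # transfer holds 8 protected blocks and 64 bytes of DIF metadata.
    echo "$(( 4096 / 512 )) blocks per transfer"           # 8
    echo "$(( (4096 / 512) * 8 )) bytes of DIF metadata"   # 64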
00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val= 00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val=software 00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.261 23:06:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val=32 00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val=32 00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val=1 00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val=No 00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val= 00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:42.261 23:06:38 -- accel/accel.sh@21 -- # val= 00:06:42.261 23:06:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # IFS=: 00:06:42.261 23:06:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.637 23:06:39 -- accel/accel.sh@21 -- # val= 00:06:43.637 23:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.637 23:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:43.637 23:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:43.637 23:06:39 -- accel/accel.sh@21 -- # val= 00:06:43.637 23:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.637 23:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:43.637 23:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:43.637 23:06:39 -- accel/accel.sh@21 -- # val= 00:06:43.638 23:06:39 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:43.638 23:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:43.638 23:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:43.638 23:06:39 -- accel/accel.sh@21 -- # val= 00:06:43.638 23:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.638 23:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:43.638 23:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:43.638 23:06:39 -- accel/accel.sh@21 -- # val= 00:06:43.638 23:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.638 23:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:43.638 23:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:43.638 23:06:39 -- accel/accel.sh@21 -- # val= 00:06:43.638 23:06:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.638 23:06:39 -- accel/accel.sh@20 -- # IFS=: 00:06:43.638 23:06:39 -- accel/accel.sh@20 -- # read -r var val 00:06:43.638 23:06:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:43.638 23:06:39 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:43.638 23:06:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.638 00:06:43.638 real 0m2.660s 00:06:43.638 user 0m2.415s 00:06:43.638 sys 0m0.256s 00:06:43.638 23:06:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.638 23:06:39 -- common/autotest_common.sh@10 -- # set +x 00:06:43.638 ************************************ 00:06:43.638 END TEST accel_dif_generate 00:06:43.638 ************************************ 00:06:43.638 23:06:39 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:43.638 23:06:39 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:43.638 23:06:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.638 23:06:39 -- common/autotest_common.sh@10 -- # set +x 00:06:43.638 ************************************ 00:06:43.638 START TEST accel_dif_generate_copy 00:06:43.638 ************************************ 00:06:43.638 23:06:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:06:43.638 23:06:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.638 23:06:39 -- accel/accel.sh@17 -- # local accel_module 00:06:43.638 23:06:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:43.638 23:06:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:43.638 23:06:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.638 23:06:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.638 23:06:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.638 23:06:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.638 23:06:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.638 23:06:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.638 23:06:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.638 23:06:39 -- accel/accel.sh@42 -- # jq -r . 00:06:43.638 [2024-11-17 23:06:39.980826] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
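Each test drives the same accel_perf binary twice, as the command lines recorded above show, and a run can be repeated by hand against a built SPDK tree. A hedged sketch follows; the -c /dev/fd/62 JSON plumbing that build_accel_config sets up is dropped here, on the assumption that accel_perf falls back to its defaults without a config:

    # Sketch only: -t (run time in seconds) and -w (workload) are copied
    # from the traced command line; SPDK_DIR names this workspace's tree.
    SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate_copy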
00:06:43.638 [2024-11-17 23:06:39.980875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291397 ]
00:06:43.638 EAL: No free 2048 kB hugepages reported on node 1
00:06:43.638 [2024-11-17 23:06:40.047279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:43.638 [2024-11-17 23:06:40.127960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:45.015 23:06:41 -- accel/accel.sh@18 -- # out='
00:06:45.015 SPDK Configuration:
00:06:45.015 Core mask: 0x1
00:06:45.015
00:06:45.015 Accel Perf Configuration:
00:06:45.015 Workload Type: dif_generate_copy
00:06:45.015 Vector size: 4096 bytes
00:06:45.015 Transfer size: 4096 bytes
00:06:45.015 Vector count 1
00:06:45.015 Module: software
00:06:45.015 Queue depth: 32
00:06:45.015 Allocate depth: 32
00:06:45.015 # threads/core: 1
00:06:45.015 Run time: 1 seconds
00:06:45.015 Verify: No
00:06:45.015
00:06:45.015 Running for 1 seconds...
00:06:45.015
00:06:45.015 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:45.015 ------------------------------------------------------------------------------------
00:06:45.015 0,0 219232/s 869 MiB/s 0 0
00:06:45.015 ====================================================================================
00:06:45.015 Total 219232/s 856 MiB/s 0 0'
00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # IFS=:
00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # read -r var val
00:06:45.015 23:06:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:06:45.015 23:06:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:06:45.015 23:06:41 -- accel/accel.sh@12 -- # build_accel_config
00:06:45.015 23:06:41 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:45.015 23:06:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:45.015 23:06:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:45.015 23:06:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:45.015 23:06:41 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:45.015 23:06:41 -- accel/accel.sh@41 -- # local IFS=,
00:06:45.015 23:06:41 -- accel/accel.sh@42 -- # jq -r .
00:06:45.015 [2024-11-17 23:06:41.317968] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
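The Total rows of the three tables so far are consistent with transfers/s multiplied by the 4096-byte transfer size; the per-core MiB/s column runs slightly higher, presumably computed over a marginally different window:

    # Reproducing the Total MiB/s figures with integer arithmetic:
    echo $(( 245696 * 4096 / 1024 / 1024 ))   # 959  (dif_verify)
    echo $(( 290336 * 4096 / 1024 / 1024 ))   # 1134 (dif_generate)
    echo $(( 219232 * 4096 / 1024 / 1024 ))   # 856  (dif_generate_copy)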
00:06:45.015 [2024-11-17 23:06:41.318054] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291674 ] 00:06:45.015 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.015 [2024-11-17 23:06:41.385652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.015 [2024-11-17 23:06:41.451361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.015 23:06:41 -- accel/accel.sh@21 -- # val= 00:06:45.015 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.015 23:06:41 -- accel/accel.sh@21 -- # val= 00:06:45.015 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.015 23:06:41 -- accel/accel.sh@21 -- # val=0x1 00:06:45.015 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.015 23:06:41 -- accel/accel.sh@21 -- # val= 00:06:45.015 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.015 23:06:41 -- accel/accel.sh@21 -- # val= 00:06:45.015 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.015 23:06:41 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:45.015 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.015 23:06:41 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.015 23:06:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.015 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.015 23:06:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.015 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.015 23:06:41 -- accel/accel.sh@21 -- # val= 00:06:45.015 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.015 23:06:41 -- accel/accel.sh@21 -- # val=software 00:06:45.015 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.015 23:06:41 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.015 23:06:41 -- accel/accel.sh@21 -- # val=32 00:06:45.015 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.015 23:06:41 -- accel/accel.sh@21 -- # val=32 00:06:45.015 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # read -r 
var val 00:06:45.015 23:06:41 -- accel/accel.sh@21 -- # val=1 00:06:45.015 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.015 23:06:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.015 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.015 23:06:41 -- accel/accel.sh@21 -- # val=No 00:06:45.015 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.015 23:06:41 -- accel/accel.sh@21 -- # val= 00:06:45.015 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:45.015 23:06:41 -- accel/accel.sh@21 -- # val= 00:06:45.015 23:06:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # IFS=: 00:06:45.015 23:06:41 -- accel/accel.sh@20 -- # read -r var val 00:06:46.391 23:06:42 -- accel/accel.sh@21 -- # val= 00:06:46.392 23:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.392 23:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:46.392 23:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:46.392 23:06:42 -- accel/accel.sh@21 -- # val= 00:06:46.392 23:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.392 23:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:46.392 23:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:46.392 23:06:42 -- accel/accel.sh@21 -- # val= 00:06:46.392 23:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.392 23:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:46.392 23:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:46.392 23:06:42 -- accel/accel.sh@21 -- # val= 00:06:46.392 23:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.392 23:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:46.392 23:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:46.392 23:06:42 -- accel/accel.sh@21 -- # val= 00:06:46.392 23:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.392 23:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:46.392 23:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:46.392 23:06:42 -- accel/accel.sh@21 -- # val= 00:06:46.392 23:06:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.392 23:06:42 -- accel/accel.sh@20 -- # IFS=: 00:06:46.392 23:06:42 -- accel/accel.sh@20 -- # read -r var val 00:06:46.392 23:06:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.392 23:06:42 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:46.392 23:06:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.392 00:06:46.392 real 0m2.655s 00:06:46.392 user 0m2.409s 00:06:46.392 sys 0m0.253s 00:06:46.392 23:06:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.392 23:06:42 -- common/autotest_common.sh@10 -- # set +x 00:06:46.392 ************************************ 00:06:46.392 END TEST accel_dif_generate_copy 00:06:46.392 ************************************ 00:06:46.392 23:06:42 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:46.392 23:06:42 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:46.392 23:06:42 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:46.392 23:06:42 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.392 23:06:42 -- common/autotest_common.sh@10 -- # set +x 00:06:46.392 ************************************ 00:06:46.392 START TEST accel_comp 00:06:46.392 ************************************ 00:06:46.392 23:06:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:46.392 23:06:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.392 23:06:42 -- accel/accel.sh@17 -- # local accel_module 00:06:46.392 23:06:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:46.392 23:06:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:46.392 23:06:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.392 23:06:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.392 23:06:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.392 23:06:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.392 23:06:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.392 23:06:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.392 23:06:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.392 23:06:42 -- accel/accel.sh@42 -- # jq -r . 00:06:46.392 [2024-11-17 23:06:42.677246] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:46.392 [2024-11-17 23:06:42.677304] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291955 ] 00:06:46.392 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.392 [2024-11-17 23:06:42.742155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.392 [2024-11-17 23:06:42.809098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.770 23:06:43 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:47.770 00:06:47.770 SPDK Configuration: 00:06:47.770 Core mask: 0x1 00:06:47.770 00:06:47.770 Accel Perf Configuration: 00:06:47.770 Workload Type: compress 00:06:47.770 Transfer size: 4096 bytes 00:06:47.770 Vector count 1 00:06:47.770 Module: software 00:06:47.770 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:47.770 Queue depth: 32 00:06:47.770 Allocate depth: 32 00:06:47.770 # threads/core: 1 00:06:47.770 Run time: 1 seconds 00:06:47.770 Verify: No 00:06:47.770 00:06:47.770 Running for 1 seconds... 
00:06:47.770
00:06:47.770 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:47.770 ------------------------------------------------------------------------------------
00:06:47.770 0,0 66528/s 277 MiB/s 0 0
00:06:47.770 ====================================================================================
00:06:47.770 Total 66528/s 259 MiB/s 0 0'
00:06:47.770 23:06:43 -- accel/accel.sh@20 -- # IFS=:
00:06:47.770 23:06:43 -- accel/accel.sh@20 -- # read -r var val
00:06:47.770 23:06:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:06:47.770 23:06:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:06:47.770 23:06:43 -- accel/accel.sh@12 -- # build_accel_config
00:06:47.770 23:06:43 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:47.770 23:06:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:47.770 23:06:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:47.770 23:06:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:47.770 23:06:43 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:47.770 23:06:43 -- accel/accel.sh@41 -- # local IFS=,
00:06:47.770 23:06:43 -- accel/accel.sh@42 -- # jq -r .
00:06:47.770 [2024-11-17 23:06:44.000928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:47.770 [2024-11-17 23:06:44.001003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292223 ]
00:06:47.770 EAL: No free 2048 kB hugepages reported on node 1
00:06:47.770 [2024-11-17 23:06:44.069286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:47.771 [2024-11-17 23:06:44.135558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:47.771 23:06:44 -- accel/accel.sh@21 -- # val=
00:06:47.771 23:06:44 -- accel/accel.sh@22 -- # case "$var" in
00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # IFS=:
00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # read -r var val
00:06:47.771 23:06:44 -- accel/accel.sh@21 -- # val=
00:06:47.771 23:06:44 -- accel/accel.sh@22 -- # case "$var" in
00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # IFS=:
00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # read -r var val
00:06:47.771 23:06:44 -- accel/accel.sh@21 -- # val=0x1
00:06:47.771 23:06:44 -- accel/accel.sh@22 -- # case "$var" in
00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # IFS=:
00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # read -r var val
00:06:47.771 23:06:44 -- accel/accel.sh@21 -- # val=
00:06:47.771 23:06:44 -- accel/accel.sh@22 -- # case "$var" in
00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # IFS=:
00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # read -r var val
00:06:47.771 23:06:44 -- accel/accel.sh@21 -- # val=
00:06:47.771 23:06:44 -- accel/accel.sh@22 -- # case "$var" in
00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # IFS=:
00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # read -r var val
00:06:47.771 23:06:44 -- accel/accel.sh@21 -- # val=compress
00:06:47.771 23:06:44 -- accel/accel.sh@22 -- # case "$var" in
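Unlike the DIF workloads, the compress run above takes a real payload: -l points accel_perf at test/accel/bib inside the checked-out tree, hence the "Preparing input file..." line in its output. The equivalent manual invocation, as a sketch under the same SPDK_DIR assumption as before:

    # Flags copied from the traced command line; config plumbing omitted.
    # 66528 transfers/s * 4096 B also reproduces the 259 MiB/s Total row.
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress -l "$SPDK_DIR/test/accel/bib"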
23:06:44 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.771 23:06:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.771 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.771 23:06:44 -- accel/accel.sh@21 -- # val= 00:06:47.771 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.771 23:06:44 -- accel/accel.sh@21 -- # val=software 00:06:47.771 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.771 23:06:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.771 23:06:44 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:47.771 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.771 23:06:44 -- accel/accel.sh@21 -- # val=32 00:06:47.771 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.771 23:06:44 -- accel/accel.sh@21 -- # val=32 00:06:47.771 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.771 23:06:44 -- accel/accel.sh@21 -- # val=1 00:06:47.771 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.771 23:06:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:47.771 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.771 23:06:44 -- accel/accel.sh@21 -- # val=No 00:06:47.771 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.771 23:06:44 -- accel/accel.sh@21 -- # val= 00:06:47.771 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:47.771 23:06:44 -- accel/accel.sh@21 -- # val= 00:06:47.771 23:06:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # IFS=: 00:06:47.771 23:06:44 -- accel/accel.sh@20 -- # read -r var val 00:06:48.707 23:06:45 -- accel/accel.sh@21 -- # val= 00:06:48.707 23:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.707 23:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:48.707 23:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:48.707 23:06:45 -- accel/accel.sh@21 -- # val= 00:06:48.707 23:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.707 23:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:48.707 23:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:48.707 23:06:45 -- accel/accel.sh@21 -- # val= 00:06:48.707 23:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.707 23:06:45 -- accel/accel.sh@20 -- # 
IFS=: 00:06:48.707 23:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:48.707 23:06:45 -- accel/accel.sh@21 -- # val= 00:06:48.707 23:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.707 23:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:48.707 23:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:48.707 23:06:45 -- accel/accel.sh@21 -- # val= 00:06:48.707 23:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.707 23:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:48.707 23:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:48.707 23:06:45 -- accel/accel.sh@21 -- # val= 00:06:48.707 23:06:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.707 23:06:45 -- accel/accel.sh@20 -- # IFS=: 00:06:48.707 23:06:45 -- accel/accel.sh@20 -- # read -r var val 00:06:48.707 23:06:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:48.707 23:06:45 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:48.707 23:06:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.707 00:06:48.707 real 0m2.647s 00:06:48.707 user 0m2.417s 00:06:48.707 sys 0m0.241s 00:06:48.707 23:06:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:48.707 23:06:45 -- common/autotest_common.sh@10 -- # set +x 00:06:48.707 ************************************ 00:06:48.707 END TEST accel_comp 00:06:48.707 ************************************ 00:06:48.966 23:06:45 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:48.966 23:06:45 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:48.966 23:06:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.966 23:06:45 -- common/autotest_common.sh@10 -- # set +x 00:06:48.966 ************************************ 00:06:48.966 START TEST accel_decomp 00:06:48.966 ************************************ 00:06:48.966 23:06:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:48.966 23:06:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.966 23:06:45 -- accel/accel.sh@17 -- # local accel_module 00:06:48.966 23:06:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:48.966 23:06:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y 00:06:48.966 23:06:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.966 23:06:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.966 23:06:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.966 23:06:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.966 23:06:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.966 23:06:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.966 23:06:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.966 23:06:45 -- accel/accel.sh@42 -- # jq -r . 00:06:48.966 [2024-11-17 23:06:45.374309] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
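The decompress invocation above differs from the compress one only by the trailing -y, and its configuration dump in the records below reports "Verify: Yes" where compress reported "Verify: No". Reading the two runs side by side, -y appears to enable verification of the decompressed output; that is an inference from this log, not a documented flag description:

    # Same sketch as the compress run, plus the -y flag under inspection.
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y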
00:06:48.966 [2024-11-17 23:06:45.374405] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292485 ]
00:06:48.966 EAL: No free 2048 kB hugepages reported on node 1
00:06:48.966 [2024-11-17 23:06:45.443423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:48.966 [2024-11-17 23:06:45.510488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:50.341 23:06:46 -- accel/accel.sh@18 -- # out='Preparing input file...
00:06:50.341
00:06:50.341 SPDK Configuration:
00:06:50.341 Core mask: 0x1
00:06:50.341
00:06:50.341 Accel Perf Configuration:
00:06:50.341 Workload Type: decompress
00:06:50.341 Transfer size: 4096 bytes
00:06:50.341 Vector count 1
00:06:50.341 Module: software
00:06:50.341 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib
00:06:50.341 Queue depth: 32
00:06:50.341 Allocate depth: 32
00:06:50.341 # threads/core: 1
00:06:50.341 Run time: 1 seconds
00:06:50.341 Verify: Yes
00:06:50.341
00:06:50.341 Running for 1 seconds...
00:06:50.341
00:06:50.341 Core,Thread Transfers Bandwidth Failed Miscompares
00:06:50.341 ------------------------------------------------------------------------------------
00:06:50.341 0,0 92128/s 169 MiB/s 0 0
00:06:50.341 ====================================================================================
00:06:50.341 Total 92128/s 359 MiB/s 0 0'
00:06:50.341 23:06:46 -- accel/accel.sh@20 -- # IFS=:
00:06:50.341 23:06:46 -- accel/accel.sh@20 -- # read -r var val
00:06:50.342 23:06:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:06:50.342 23:06:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y
00:06:50.342 23:06:46 -- accel/accel.sh@12 -- # build_accel_config
00:06:50.342 23:06:46 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:50.342 23:06:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:50.342 23:06:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:50.342 23:06:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:50.342 23:06:46 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:50.342 23:06:46 -- accel/accel.sh@41 -- # local IFS=,
00:06:50.342 23:06:46 -- accel/accel.sh@42 -- # jq -r .
00:06:50.342 [2024-11-17 23:06:46.703367] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:50.342 [2024-11-17 23:06:46.703454] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292656 ] 00:06:50.342 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.342 [2024-11-17 23:06:46.774034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.342 [2024-11-17 23:06:46.841218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.342 23:06:46 -- accel/accel.sh@21 -- # val= 00:06:50.342 23:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:50.342 23:06:46 -- accel/accel.sh@21 -- # val= 00:06:50.342 23:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:50.342 23:06:46 -- accel/accel.sh@21 -- # val= 00:06:50.342 23:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:50.342 23:06:46 -- accel/accel.sh@21 -- # val=0x1 00:06:50.342 23:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:50.342 23:06:46 -- accel/accel.sh@21 -- # val= 00:06:50.342 23:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:50.342 23:06:46 -- accel/accel.sh@21 -- # val= 00:06:50.342 23:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:50.342 23:06:46 -- accel/accel.sh@21 -- # val=decompress 00:06:50.342 23:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.342 23:06:46 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:50.342 23:06:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.342 23:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:50.342 23:06:46 -- accel/accel.sh@21 -- # val= 00:06:50.342 23:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:50.342 23:06:46 -- accel/accel.sh@21 -- # val=software 00:06:50.342 23:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.342 23:06:46 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:50.342 23:06:46 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:50.342 23:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:50.342 23:06:46 -- accel/accel.sh@21 -- # val=32 00:06:50.342 23:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:50.342 
23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:50.342 23:06:46 -- accel/accel.sh@21 -- # val=32 00:06:50.342 23:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:50.342 23:06:46 -- accel/accel.sh@21 -- # val=1 00:06:50.342 23:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:50.342 23:06:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.342 23:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:50.342 23:06:46 -- accel/accel.sh@21 -- # val=Yes 00:06:50.342 23:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:50.342 23:06:46 -- accel/accel.sh@21 -- # val= 00:06:50.342 23:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:50.342 23:06:46 -- accel/accel.sh@21 -- # val= 00:06:50.342 23:06:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # IFS=: 00:06:50.342 23:06:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.720 23:06:48 -- accel/accel.sh@21 -- # val= 00:06:51.720 23:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.720 23:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.720 23:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.720 23:06:48 -- accel/accel.sh@21 -- # val= 00:06:51.720 23:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.720 23:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.720 23:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.720 23:06:48 -- accel/accel.sh@21 -- # val= 00:06:51.720 23:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.720 23:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.720 23:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.720 23:06:48 -- accel/accel.sh@21 -- # val= 00:06:51.720 23:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.720 23:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.720 23:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.720 23:06:48 -- accel/accel.sh@21 -- # val= 00:06:51.720 23:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.720 23:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.720 23:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.720 23:06:48 -- accel/accel.sh@21 -- # val= 00:06:51.720 23:06:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.720 23:06:48 -- accel/accel.sh@20 -- # IFS=: 00:06:51.720 23:06:48 -- accel/accel.sh@20 -- # read -r var val 00:06:51.720 23:06:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.720 23:06:48 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:51.720 23:06:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.720 00:06:51.720 real 0m2.666s 00:06:51.720 user 0m2.413s 00:06:51.720 sys 0m0.264s 00:06:51.720 23:06:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.720 23:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:51.720 ************************************ 00:06:51.720 END TEST accel_decomp 00:06:51.720 ************************************ 00:06:51.720 23:06:48 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 
-w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:51.720 23:06:48 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:51.720 23:06:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.720 23:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:51.720 ************************************ 00:06:51.720 START TEST accel_decmop_full 00:06:51.720 ************************************ 00:06:51.720 23:06:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:51.720 23:06:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.720 23:06:48 -- accel/accel.sh@17 -- # local accel_module 00:06:51.720 23:06:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:51.720 23:06:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:51.720 23:06:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.720 23:06:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.720 23:06:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.720 23:06:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.720 23:06:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.720 23:06:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.720 23:06:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.720 23:06:48 -- accel/accel.sh@42 -- # jq -r . 00:06:51.720 [2024-11-17 23:06:48.069731] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:51.720 [2024-11-17 23:06:48.069791] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292856 ] 00:06:51.720 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.720 [2024-11-17 23:06:48.133336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.720 [2024-11-17 23:06:48.200725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.098 23:06:49 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:53.098 00:06:53.098 SPDK Configuration: 00:06:53.098 Core mask: 0x1 00:06:53.098 00:06:53.098 Accel Perf Configuration: 00:06:53.098 Workload Type: decompress 00:06:53.098 Transfer size: 111250 bytes 00:06:53.098 Vector count 1 00:06:53.098 Module: software 00:06:53.098 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:53.098 Queue depth: 32 00:06:53.098 Allocate depth: 32 00:06:53.098 # threads/core: 1 00:06:53.098 Run time: 1 seconds 00:06:53.098 Verify: Yes 00:06:53.098 00:06:53.098 Running for 1 seconds... 
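With -o 0 appended, the harness reruns decompress with full-size buffers: the configuration above reports a 111250-byte transfer size, presumably the decompressed length of the bib input (an inference from this log). That size makes the rate in the table just below line up with its Total row:

    # 5952 transfers/s at 111250 bytes each, floored to MiB/s:
    echo $(( 5952 * 111250 / 1024 / 1024 ))   # 631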
00:06:53.098 00:06:53.098 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:53.098 ------------------------------------------------------------------------------------ 00:06:53.098 0,0 5952/s 245 MiB/s 0 0 00:06:53.098 ==================================================================================== 00:06:53.098 Total 5952/s 631 MiB/s 0 0' 00:06:53.098 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.098 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.098 23:06:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:53.098 23:06:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:53.098 23:06:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.098 23:06:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.098 23:06:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.098 23:06:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.098 23:06:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.098 23:06:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.098 23:06:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.098 23:06:49 -- accel/accel.sh@42 -- # jq -r . 00:06:53.098 [2024-11-17 23:06:49.401794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:53.099 [2024-11-17 23:06:49.401881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293083 ] 00:06:53.099 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.099 [2024-11-17 23:06:49.470333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.099 [2024-11-17 23:06:49.536399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.099 23:06:49 -- accel/accel.sh@21 -- # val= 00:06:53.099 23:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.099 23:06:49 -- accel/accel.sh@21 -- # val= 00:06:53.099 23:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.099 23:06:49 -- accel/accel.sh@21 -- # val= 00:06:53.099 23:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.099 23:06:49 -- accel/accel.sh@21 -- # val=0x1 00:06:53.099 23:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.099 23:06:49 -- accel/accel.sh@21 -- # val= 00:06:53.099 23:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.099 23:06:49 -- accel/accel.sh@21 -- # val= 00:06:53.099 23:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.099 23:06:49 -- accel/accel.sh@21 -- # val=decompress 00:06:53.099 23:06:49 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:53.099 23:06:49 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.099 23:06:49 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:53.099 23:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.099 23:06:49 -- accel/accel.sh@21 -- # val= 00:06:53.099 23:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.099 23:06:49 -- accel/accel.sh@21 -- # val=software 00:06:53.099 23:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.099 23:06:49 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.099 23:06:49 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:53.099 23:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.099 23:06:49 -- accel/accel.sh@21 -- # val=32 00:06:53.099 23:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.099 23:06:49 -- accel/accel.sh@21 -- # val=32 00:06:53.099 23:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.099 23:06:49 -- accel/accel.sh@21 -- # val=1 00:06:53.099 23:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.099 23:06:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.099 23:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.099 23:06:49 -- accel/accel.sh@21 -- # val=Yes 00:06:53.099 23:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.099 23:06:49 -- accel/accel.sh@21 -- # val= 00:06:53.099 23:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:53.099 23:06:49 -- accel/accel.sh@21 -- # val= 00:06:53.099 23:06:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # IFS=: 00:06:53.099 23:06:49 -- accel/accel.sh@20 -- # read -r var val 00:06:54.478 23:06:50 -- accel/accel.sh@21 -- # val= 00:06:54.478 23:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.478 23:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:54.478 23:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:54.478 23:06:50 -- accel/accel.sh@21 -- # val= 00:06:54.478 23:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.478 23:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:54.478 23:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:54.478 23:06:50 -- accel/accel.sh@21 -- # val= 00:06:54.478 23:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.478 23:06:50 
-- accel/accel.sh@20 -- # IFS=: 00:06:54.478 23:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:54.478 23:06:50 -- accel/accel.sh@21 -- # val= 00:06:54.478 23:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.478 23:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:54.478 23:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:54.478 23:06:50 -- accel/accel.sh@21 -- # val= 00:06:54.478 23:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.478 23:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:54.478 23:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:54.478 23:06:50 -- accel/accel.sh@21 -- # val= 00:06:54.478 23:06:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.478 23:06:50 -- accel/accel.sh@20 -- # IFS=: 00:06:54.479 23:06:50 -- accel/accel.sh@20 -- # read -r var val 00:06:54.479 23:06:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.479 23:06:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:54.479 23:06:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.479 00:06:54.479 real 0m2.666s 00:06:54.479 user 0m2.422s 00:06:54.479 sys 0m0.250s 00:06:54.479 23:06:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.479 23:06:50 -- common/autotest_common.sh@10 -- # set +x 00:06:54.479 ************************************ 00:06:54.479 END TEST accel_decmop_full 00:06:54.479 ************************************ 00:06:54.479 23:06:50 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:54.479 23:06:50 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:54.479 23:06:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.479 23:06:50 -- common/autotest_common.sh@10 -- # set +x 00:06:54.479 ************************************ 00:06:54.479 START TEST accel_decomp_mcore 00:06:54.479 ************************************ 00:06:54.479 23:06:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:54.479 23:06:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.479 23:06:50 -- accel/accel.sh@17 -- # local accel_module 00:06:54.479 23:06:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:54.479 23:06:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:54.479 23:06:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.479 23:06:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.479 23:06:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.479 23:06:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.479 23:06:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.479 23:06:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.479 23:06:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.479 23:06:50 -- accel/accel.sh@42 -- # jq -r . 00:06:54.479 [2024-11-17 23:06:50.771751] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
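The accel_decmop_full pass above repeats the software decompress test with -o 0; judging by its configuration block ("Transfer size: 111250 bytes" instead of the 4096 used elsewhere), that makes accel_perf process the full bib input per operation. A minimal standalone reproduction, assuming the SPDK build-tree layout of this workspace; the -c /dev/fd/62 JSON config is supplied by the test harness and is omitted here:

  cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  # full-buffer (-o 0) software decompress of test/accel/bib, verified (-y), for 1 second (-t 1)
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0

The Total row above is self-consistent: 5952 transfers/s x 111250 B = 662,160,000 B/s, about 631 MiB/s.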
00:06:54.479 [2024-11-17 23:06:50.771801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293372 ] 00:06:54.479 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.479 [2024-11-17 23:06:50.835885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.479 [2024-11-17 23:06:50.905414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.479 [2024-11-17 23:06:50.905510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.479 [2024-11-17 23:06:50.905571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.479 [2024-11-17 23:06:50.905573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.857 23:06:52 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:55.857 00:06:55.857 SPDK Configuration: 00:06:55.857 Core mask: 0xf 00:06:55.857 00:06:55.857 Accel Perf Configuration: 00:06:55.857 Workload Type: decompress 00:06:55.857 Transfer size: 4096 bytes 00:06:55.857 Vector count 1 00:06:55.857 Module: software 00:06:55.857 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:55.857 Queue depth: 32 00:06:55.857 Allocate depth: 32 00:06:55.857 # threads/core: 1 00:06:55.857 Run time: 1 seconds 00:06:55.857 Verify: Yes 00:06:55.857 00:06:55.857 Running for 1 seconds... 00:06:55.857 00:06:55.857 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.857 ------------------------------------------------------------------------------------ 00:06:55.857 0,0 75936/s 139 MiB/s 0 0 00:06:55.857 3,0 76032/s 140 MiB/s 0 0 00:06:55.857 2,0 76000/s 140 MiB/s 0 0 00:06:55.857 1,0 75968/s 140 MiB/s 0 0 00:06:55.857 ==================================================================================== 00:06:55.857 Total 303936/s 1187 MiB/s 0 0' 00:06:55.857 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.857 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.857 23:06:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:55.858 23:06:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:55.858 23:06:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.858 23:06:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.858 23:06:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.858 23:06:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.858 23:06:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.858 23:06:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.858 23:06:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.858 23:06:52 -- accel/accel.sh@42 -- # jq -r . 00:06:55.858 [2024-11-17 23:06:52.109981] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
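The dense runs of '-- # IFS=:', '-- # read -r var val' and '-- # case "$var" in' markers filling this log are bash xtrace output from the settings loop in accel.sh. A schematic sketch of the loop shape those markers imply; this is not the actual accel.sh source, and the var:val pairs in the here-doc are invented placeholders:

  # Schematic only: IFS=: makes read split each line on ':',
  # and the case statement dispatches on the setting name.
  while IFS=: read -r var val; do
    case "$var" in
      *) printf 'parsed %s=%s\n' "$var" "$val" ;;
    esac
  done <<'EOF'
  queue_depth:32
  run_time:1
  EOF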
00:06:55.858 [2024-11-17 23:06:52.110069] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293641 ] 00:06:55.858 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.858 [2024-11-17 23:06:52.181284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.858 [2024-11-17 23:06:52.249695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.858 [2024-11-17 23:06:52.249791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.858 [2024-11-17 23:06:52.249877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.858 [2024-11-17 23:06:52.249879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.858 23:06:52 -- accel/accel.sh@21 -- # val= 00:06:55.858 23:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.858 23:06:52 -- accel/accel.sh@21 -- # val= 00:06:55.858 23:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.858 23:06:52 -- accel/accel.sh@21 -- # val= 00:06:55.858 23:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.858 23:06:52 -- accel/accel.sh@21 -- # val=0xf 00:06:55.858 23:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.858 23:06:52 -- accel/accel.sh@21 -- # val= 00:06:55.858 23:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.858 23:06:52 -- accel/accel.sh@21 -- # val= 00:06:55.858 23:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.858 23:06:52 -- accel/accel.sh@21 -- # val=decompress 00:06:55.858 23:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.858 23:06:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.858 23:06:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.858 23:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.858 23:06:52 -- accel/accel.sh@21 -- # val= 00:06:55.858 23:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.858 23:06:52 -- accel/accel.sh@21 -- # val=software 00:06:55.858 23:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.858 23:06:52 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.858 23:06:52 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:55.858 23:06:52 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.858 23:06:52 -- accel/accel.sh@21 -- # val=32 00:06:55.858 23:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.858 23:06:52 -- accel/accel.sh@21 -- # val=32 00:06:55.858 23:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.858 23:06:52 -- accel/accel.sh@21 -- # val=1 00:06:55.858 23:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.858 23:06:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.858 23:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.858 23:06:52 -- accel/accel.sh@21 -- # val=Yes 00:06:55.858 23:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.858 23:06:52 -- accel/accel.sh@21 -- # val= 00:06:55.858 23:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:55.858 23:06:52 -- accel/accel.sh@21 -- # val= 00:06:55.858 23:06:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # IFS=: 00:06:55.858 23:06:52 -- accel/accel.sh@20 -- # read -r var val 00:06:57.236 23:06:53 -- accel/accel.sh@21 -- # val= 00:06:57.236 23:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.236 23:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:57.236 23:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:57.236 23:06:53 -- accel/accel.sh@21 -- # val= 00:06:57.236 23:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.236 23:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:57.236 23:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:57.236 23:06:53 -- accel/accel.sh@21 -- # val= 00:06:57.236 23:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.236 23:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:57.236 23:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:57.236 23:06:53 -- accel/accel.sh@21 -- # val= 00:06:57.236 23:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.236 23:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:57.236 23:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:57.236 23:06:53 -- accel/accel.sh@21 -- # val= 00:06:57.236 23:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.236 23:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:57.236 23:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:57.236 23:06:53 -- accel/accel.sh@21 -- # val= 00:06:57.236 23:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.236 23:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:57.236 23:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:57.236 23:06:53 -- accel/accel.sh@21 -- # val= 00:06:57.236 23:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.236 23:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:57.236 23:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:57.236 23:06:53 -- accel/accel.sh@21 -- # val= 00:06:57.236 23:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.236 
23:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:57.236 23:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:57.236 23:06:53 -- accel/accel.sh@21 -- # val= 00:06:57.236 23:06:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.236 23:06:53 -- accel/accel.sh@20 -- # IFS=: 00:06:57.236 23:06:53 -- accel/accel.sh@20 -- # read -r var val 00:06:57.236 23:06:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.236 23:06:53 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:57.236 23:06:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.236 00:06:57.236 real 0m2.677s 00:06:57.236 user 0m9.084s 00:06:57.236 sys 0m0.264s 00:06:57.236 23:06:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.236 23:06:53 -- common/autotest_common.sh@10 -- # set +x 00:06:57.236 ************************************ 00:06:57.236 END TEST accel_decomp_mcore 00:06:57.236 ************************************ 00:06:57.236 23:06:53 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:57.236 23:06:53 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:57.236 23:06:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.236 23:06:53 -- common/autotest_common.sh@10 -- # set +x 00:06:57.236 ************************************ 00:06:57.236 START TEST accel_decomp_full_mcore 00:06:57.236 ************************************ 00:06:57.236 23:06:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:57.236 23:06:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:57.236 23:06:53 -- accel/accel.sh@17 -- # local accel_module 00:06:57.236 23:06:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:57.236 23:06:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:57.236 23:06:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.236 23:06:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.236 23:06:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.236 23:06:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.236 23:06:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.236 23:06:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.236 23:06:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.236 23:06:53 -- accel/accel.sh@42 -- # jq -r . 00:06:57.236 [2024-11-17 23:06:53.507083] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
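The -m 0xf flag on the accel_perf command above is a hexadecimal core mask: bits 0 through 3 set, selecting cores 0-3, which matches the 'Total cores available: 4' notice and the four 'Reactor started on core N' lines that accompany these runs. A throwaway shell one-liner for building such a mask (the core count of 4 is just for illustration):

  printf '0x%x\n' $(( (1 << 4) - 1 ))   # prints 0xf, i.e. cores 0-3

The earlier accel_decomp_mcore tables show what the mask buys: four reactors each sustaining roughly 76,000 4096-byte decompress ops per second, with the Total of 303936/s x 4096 B working out to about 1187 MiB/s, near-linear scaling over a single core.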
00:06:57.236 [2024-11-17 23:06:53.507169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293927 ] 00:06:57.236 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.236 [2024-11-17 23:06:53.576908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:57.236 [2024-11-17 23:06:53.646708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.236 [2024-11-17 23:06:53.646806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.236 [2024-11-17 23:06:53.646892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.236 [2024-11-17 23:06:53.646894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.614 23:06:54 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:58.614 00:06:58.614 SPDK Configuration: 00:06:58.614 Core mask: 0xf 00:06:58.614 00:06:58.614 Accel Perf Configuration: 00:06:58.614 Workload Type: decompress 00:06:58.614 Transfer size: 111250 bytes 00:06:58.614 Vector count 1 00:06:58.614 Module: software 00:06:58.614 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:58.614 Queue depth: 32 00:06:58.614 Allocate depth: 32 00:06:58.614 # threads/core: 1 00:06:58.614 Run time: 1 seconds 00:06:58.614 Verify: Yes 00:06:58.614 00:06:58.614 Running for 1 seconds... 00:06:58.614 00:06:58.614 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.614 ------------------------------------------------------------------------------------ 00:06:58.614 0,0 5792/s 239 MiB/s 0 0 00:06:58.614 3,0 5824/s 240 MiB/s 0 0 00:06:58.614 2,0 5824/s 240 MiB/s 0 0 00:06:58.614 1,0 5824/s 240 MiB/s 0 0 00:06:58.614 ==================================================================================== 00:06:58.614 Total 23264/s 2468 MiB/s 0 0' 00:06:58.614 23:06:54 -- accel/accel.sh@20 -- # IFS=: 00:06:58.614 23:06:54 -- accel/accel.sh@20 -- # read -r var val 00:06:58.614 23:06:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:58.614 23:06:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:58.614 23:06:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.614 23:06:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.614 23:06:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.614 23:06:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.614 23:06:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.614 23:06:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.614 23:06:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.614 23:06:54 -- accel/accel.sh@42 -- # jq -r . 00:06:58.614 [2024-11-17 23:06:54.854860] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
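The full-buffer multicore table above tells the same scaling story: each of the four cores decompresses the entire 111250-byte bib per operation at roughly the single-core rate (5792-5824/s per core versus 5952/s earlier), and the Total row checks out as 23264/s x 111250 B = 2,588,120,000 B/s, about 2468 MiB/s.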
00:06:58.614 [2024-11-17 23:06:54.854946] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294204 ] 00:06:58.614 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.614 [2024-11-17 23:06:54.924376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.614 [2024-11-17 23:06:54.993315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.614 [2024-11-17 23:06:54.993412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.614 [2024-11-17 23:06:54.993474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.614 [2024-11-17 23:06:54.993476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.614 23:06:55 -- accel/accel.sh@21 -- # val= 00:06:58.615 23:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:58.615 23:06:55 -- accel/accel.sh@21 -- # val= 00:06:58.615 23:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:58.615 23:06:55 -- accel/accel.sh@21 -- # val= 00:06:58.615 23:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:58.615 23:06:55 -- accel/accel.sh@21 -- # val=0xf 00:06:58.615 23:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:58.615 23:06:55 -- accel/accel.sh@21 -- # val= 00:06:58.615 23:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:58.615 23:06:55 -- accel/accel.sh@21 -- # val= 00:06:58.615 23:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:58.615 23:06:55 -- accel/accel.sh@21 -- # val=decompress 00:06:58.615 23:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.615 23:06:55 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:58.615 23:06:55 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:58.615 23:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:58.615 23:06:55 -- accel/accel.sh@21 -- # val= 00:06:58.615 23:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:58.615 23:06:55 -- accel/accel.sh@21 -- # val=software 00:06:58.615 23:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.615 23:06:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:58.615 23:06:55 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:06:58.615 23:06:55 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:58.615 23:06:55 -- accel/accel.sh@21 -- # val=32 00:06:58.615 23:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:58.615 23:06:55 -- accel/accel.sh@21 -- # val=32 00:06:58.615 23:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:58.615 23:06:55 -- accel/accel.sh@21 -- # val=1 00:06:58.615 23:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:58.615 23:06:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.615 23:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:58.615 23:06:55 -- accel/accel.sh@21 -- # val=Yes 00:06:58.615 23:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:58.615 23:06:55 -- accel/accel.sh@21 -- # val= 00:06:58.615 23:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:58.615 23:06:55 -- accel/accel.sh@21 -- # val= 00:06:58.615 23:06:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # IFS=: 00:06:58.615 23:06:55 -- accel/accel.sh@20 -- # read -r var val 00:06:59.993 23:06:56 -- accel/accel.sh@21 -- # val= 00:06:59.993 23:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.993 23:06:56 -- accel/accel.sh@20 -- # IFS=: 00:06:59.993 23:06:56 -- accel/accel.sh@20 -- # read -r var val 00:06:59.993 23:06:56 -- accel/accel.sh@21 -- # val= 00:06:59.993 23:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.993 23:06:56 -- accel/accel.sh@20 -- # IFS=: 00:06:59.993 23:06:56 -- accel/accel.sh@20 -- # read -r var val 00:06:59.993 23:06:56 -- accel/accel.sh@21 -- # val= 00:06:59.993 23:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.993 23:06:56 -- accel/accel.sh@20 -- # IFS=: 00:06:59.993 23:06:56 -- accel/accel.sh@20 -- # read -r var val 00:06:59.993 23:06:56 -- accel/accel.sh@21 -- # val= 00:06:59.993 23:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.993 23:06:56 -- accel/accel.sh@20 -- # IFS=: 00:06:59.993 23:06:56 -- accel/accel.sh@20 -- # read -r var val 00:06:59.993 23:06:56 -- accel/accel.sh@21 -- # val= 00:06:59.993 23:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.993 23:06:56 -- accel/accel.sh@20 -- # IFS=: 00:06:59.993 23:06:56 -- accel/accel.sh@20 -- # read -r var val 00:06:59.993 23:06:56 -- accel/accel.sh@21 -- # val= 00:06:59.993 23:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.993 23:06:56 -- accel/accel.sh@20 -- # IFS=: 00:06:59.993 23:06:56 -- accel/accel.sh@20 -- # read -r var val 00:06:59.993 23:06:56 -- accel/accel.sh@21 -- # val= 00:06:59.993 23:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.993 23:06:56 -- accel/accel.sh@20 -- # IFS=: 00:06:59.993 23:06:56 -- accel/accel.sh@20 -- # read -r var val 00:06:59.993 23:06:56 -- accel/accel.sh@21 -- # val= 00:06:59.993 23:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.993 
23:06:56 -- accel/accel.sh@20 -- # IFS=: 00:06:59.993 23:06:56 -- accel/accel.sh@20 -- # read -r var val 00:06:59.993 23:06:56 -- accel/accel.sh@21 -- # val= 00:06:59.993 23:06:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.993 23:06:56 -- accel/accel.sh@20 -- # IFS=: 00:06:59.993 23:06:56 -- accel/accel.sh@20 -- # read -r var val 00:06:59.993 23:06:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.993 23:06:56 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:59.993 23:06:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.993 00:06:59.993 real 0m2.704s 00:06:59.993 user 0m9.132s 00:06:59.993 sys 0m0.273s 00:06:59.993 23:06:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:59.993 23:06:56 -- common/autotest_common.sh@10 -- # set +x 00:06:59.993 ************************************ 00:06:59.993 END TEST accel_decomp_full_mcore 00:06:59.993 ************************************ 00:06:59.993 23:06:56 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.993 23:06:56 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:59.993 23:06:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.993 23:06:56 -- common/autotest_common.sh@10 -- # set +x 00:06:59.993 ************************************ 00:06:59.993 START TEST accel_decomp_mthread 00:06:59.993 ************************************ 00:06:59.993 23:06:56 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.993 23:06:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.993 23:06:56 -- accel/accel.sh@17 -- # local accel_module 00:06:59.993 23:06:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.993 23:06:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.993 23:06:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.993 23:06:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.993 23:06:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.993 23:06:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.993 23:06:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.993 23:06:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.993 23:06:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.993 23:06:56 -- accel/accel.sh@42 -- # jq -r . 00:06:59.993 [2024-11-17 23:06:56.237751] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.993 [2024-11-17 23:06:56.237800] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294494 ] 00:06:59.993 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.993 [2024-11-17 23:06:56.300270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.993 [2024-11-17 23:06:56.367265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.929 23:06:57 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:00.929 00:07:00.929 SPDK Configuration: 00:07:00.929 Core mask: 0x1 00:07:00.929 00:07:00.929 Accel Perf Configuration: 00:07:00.929 Workload Type: decompress 00:07:00.929 Transfer size: 4096 bytes 00:07:00.929 Vector count 1 00:07:00.929 Module: software 00:07:00.929 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:00.929 Queue depth: 32 00:07:00.929 Allocate depth: 32 00:07:00.929 # threads/core: 2 00:07:00.929 Run time: 1 seconds 00:07:00.929 Verify: Yes 00:07:00.929 00:07:00.929 Running for 1 seconds... 00:07:00.929 00:07:00.929 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.929 ------------------------------------------------------------------------------------ 00:07:00.929 0,1 46912/s 86 MiB/s 0 0 00:07:00.929 0,0 46816/s 86 MiB/s 0 0 00:07:00.929 ==================================================================================== 00:07:00.929 Total 93728/s 366 MiB/s 0 0' 00:07:00.929 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:01.188 23:06:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:01.188 23:06:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:01.188 23:06:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.188 23:06:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.188 23:06:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.188 23:06:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.188 23:06:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.188 23:06:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.188 23:06:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.188 23:06:57 -- accel/accel.sh@42 -- # jq -r . 00:07:01.188 [2024-11-17 23:06:57.561649] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
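accel_decomp_mthread trades the wide core mask for -T 2: two worker threads on the single core allowed by the default 0x1 mask. The table above lists them as Core,Thread pairs 0,0 and 0,1, moving a combined 93728 transfers/s x 4096 B, about 366 MiB/s. The invocation recorded in the log can be reproduced the same hedged way as before (harness-supplied config fd omitted):

  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -T 2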
00:07:01.188 [2024-11-17 23:06:57.561735] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294663 ] 00:07:01.188 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.188 [2024-11-17 23:06:57.631832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.188 [2024-11-17 23:06:57.698289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.188 23:06:57 -- accel/accel.sh@21 -- # val= 00:07:01.188 23:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:01.188 23:06:57 -- accel/accel.sh@21 -- # val= 00:07:01.188 23:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:01.188 23:06:57 -- accel/accel.sh@21 -- # val= 00:07:01.188 23:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:01.188 23:06:57 -- accel/accel.sh@21 -- # val=0x1 00:07:01.188 23:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:01.188 23:06:57 -- accel/accel.sh@21 -- # val= 00:07:01.188 23:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:01.188 23:06:57 -- accel/accel.sh@21 -- # val= 00:07:01.188 23:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:01.188 23:06:57 -- accel/accel.sh@21 -- # val=decompress 00:07:01.188 23:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.188 23:06:57 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:01.188 23:06:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.188 23:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:01.188 23:06:57 -- accel/accel.sh@21 -- # val= 00:07:01.188 23:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:01.188 23:06:57 -- accel/accel.sh@21 -- # val=software 00:07:01.188 23:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.188 23:06:57 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:01.188 23:06:57 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:01.188 23:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.188 23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 23:06:57 -- accel/accel.sh@21 -- # val=32 00:07:01.189 23:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 
23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 23:06:57 -- accel/accel.sh@21 -- # val=32 00:07:01.189 23:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 23:06:57 -- accel/accel.sh@21 -- # val=2 00:07:01.189 23:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 23:06:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.189 23:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 23:06:57 -- accel/accel.sh@21 -- # val=Yes 00:07:01.189 23:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 23:06:57 -- accel/accel.sh@21 -- # val= 00:07:01.189 23:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:01.189 23:06:57 -- accel/accel.sh@21 -- # val= 00:07:01.189 23:06:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.189 23:06:57 -- accel/accel.sh@20 -- # IFS=: 00:07:01.189 23:06:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.567 23:06:58 -- accel/accel.sh@21 -- # val= 00:07:02.567 23:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.567 23:06:58 -- accel/accel.sh@20 -- # IFS=: 00:07:02.567 23:06:58 -- accel/accel.sh@20 -- # read -r var val 00:07:02.567 23:06:58 -- accel/accel.sh@21 -- # val= 00:07:02.567 23:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.567 23:06:58 -- accel/accel.sh@20 -- # IFS=: 00:07:02.567 23:06:58 -- accel/accel.sh@20 -- # read -r var val 00:07:02.567 23:06:58 -- accel/accel.sh@21 -- # val= 00:07:02.567 23:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.567 23:06:58 -- accel/accel.sh@20 -- # IFS=: 00:07:02.567 23:06:58 -- accel/accel.sh@20 -- # read -r var val 00:07:02.567 23:06:58 -- accel/accel.sh@21 -- # val= 00:07:02.567 23:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.567 23:06:58 -- accel/accel.sh@20 -- # IFS=: 00:07:02.567 23:06:58 -- accel/accel.sh@20 -- # read -r var val 00:07:02.567 23:06:58 -- accel/accel.sh@21 -- # val= 00:07:02.568 23:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.568 23:06:58 -- accel/accel.sh@20 -- # IFS=: 00:07:02.568 23:06:58 -- accel/accel.sh@20 -- # read -r var val 00:07:02.568 23:06:58 -- accel/accel.sh@21 -- # val= 00:07:02.568 23:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.568 23:06:58 -- accel/accel.sh@20 -- # IFS=: 00:07:02.568 23:06:58 -- accel/accel.sh@20 -- # read -r var val 00:07:02.568 23:06:58 -- accel/accel.sh@21 -- # val= 00:07:02.568 23:06:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.568 23:06:58 -- accel/accel.sh@20 -- # IFS=: 00:07:02.568 23:06:58 -- accel/accel.sh@20 -- # read -r var val 00:07:02.568 23:06:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.568 23:06:58 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:02.568 23:06:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.568 00:07:02.568 real 0m2.652s 00:07:02.568 user 0m2.414s 00:07:02.568 sys 0m0.249s 00:07:02.568 23:06:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:02.568 23:06:58 -- common/autotest_common.sh@10 -- # 
set +x 00:07:02.568 ************************************ 00:07:02.568 END TEST accel_decomp_mthread 00:07:02.568 ************************************ 00:07:02.568 23:06:58 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:02.568 23:06:58 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:02.568 23:06:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.568 23:06:58 -- common/autotest_common.sh@10 -- # set +x 00:07:02.568 ************************************ 00:07:02.568 START TEST accel_deomp_full_mthread 00:07:02.568 ************************************ 00:07:02.568 23:06:58 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:02.568 23:06:58 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.568 23:06:58 -- accel/accel.sh@17 -- # local accel_module 00:07:02.568 23:06:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:02.568 23:06:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:02.568 23:06:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.568 23:06:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.568 23:06:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.568 23:06:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.568 23:06:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.568 23:06:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.568 23:06:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.568 23:06:58 -- accel/accel.sh@42 -- # jq -r . 00:07:02.568 [2024-11-17 23:06:58.946291] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:02.568 [2024-11-17 23:06:58.946377] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294878 ] 00:07:02.568 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.568 [2024-11-17 23:06:59.017565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.568 [2024-11-17 23:06:59.085437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.945 23:07:00 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:03.945 00:07:03.945 SPDK Configuration: 00:07:03.945 Core mask: 0x1 00:07:03.945 00:07:03.945 Accel Perf Configuration: 00:07:03.945 Workload Type: decompress 00:07:03.945 Transfer size: 111250 bytes 00:07:03.945 Vector count 1 00:07:03.945 Module: software 00:07:03.945 File Name: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:03.945 Queue depth: 32 00:07:03.945 Allocate depth: 32 00:07:03.945 # threads/core: 2 00:07:03.945 Run time: 1 seconds 00:07:03.945 Verify: Yes 00:07:03.945 00:07:03.945 Running for 1 seconds... 
00:07:03.945 00:07:03.945 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.945 ------------------------------------------------------------------------------------ 00:07:03.945 0,1 2944/s 121 MiB/s 0 0 00:07:03.945 0,0 2912/s 120 MiB/s 0 0 00:07:03.945 ==================================================================================== 00:07:03.945 Total 5856/s 621 MiB/s 0 0' 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.945 23:07:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:03.945 23:07:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:03.945 23:07:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.945 23:07:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.945 23:07:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.945 23:07:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.945 23:07:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.945 23:07:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.945 23:07:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.945 23:07:00 -- accel/accel.sh@42 -- # jq -r . 00:07:03.945 [2024-11-17 23:07:00.303422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:03.945 [2024-11-17 23:07:00.303511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295071 ] 00:07:03.945 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.945 [2024-11-17 23:07:00.375363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.945 [2024-11-17 23:07:00.443513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.945 23:07:00 -- accel/accel.sh@21 -- # val= 00:07:03.945 23:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.945 23:07:00 -- accel/accel.sh@21 -- # val= 00:07:03.945 23:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.945 23:07:00 -- accel/accel.sh@21 -- # val= 00:07:03.945 23:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.945 23:07:00 -- accel/accel.sh@21 -- # val=0x1 00:07:03.945 23:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.945 23:07:00 -- accel/accel.sh@21 -- # val= 00:07:03.945 23:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.945 23:07:00 -- accel/accel.sh@21 -- # val= 00:07:03.945 23:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.945 23:07:00 -- accel/accel.sh@21 -- # val=decompress 
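One observation from the table above: extra threads on one core split, rather than add, software-decompress throughput. A single thread reached 5952 full-buffer ops/s in the accel_decmop_full run earlier, while -T 2 on the same core yields 2944 + 2912 = 5856/s combined (x 111250 B, about 621 MiB/s); the workload is CPU-bound, so the second thread on core 0 adds essentially nothing.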
00:07:03.945 23:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.945 23:07:00 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.945 23:07:00 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:03.945 23:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.945 23:07:00 -- accel/accel.sh@21 -- # val= 00:07:03.945 23:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.945 23:07:00 -- accel/accel.sh@21 -- # val=software 00:07:03.945 23:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.945 23:07:00 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.945 23:07:00 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/bib 00:07:03.945 23:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.945 23:07:00 -- accel/accel.sh@21 -- # val=32 00:07:03.945 23:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.945 23:07:00 -- accel/accel.sh@21 -- # val=32 00:07:03.945 23:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.945 23:07:00 -- accel/accel.sh@21 -- # val=2 00:07:03.945 23:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.945 23:07:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.945 23:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.945 23:07:00 -- accel/accel.sh@21 -- # val=Yes 00:07:03.945 23:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.945 23:07:00 -- accel/accel.sh@21 -- # val= 00:07:03.945 23:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.945 23:07:00 -- accel/accel.sh@21 -- # val= 00:07:03.945 23:07:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.945 23:07:00 -- accel/accel.sh@20 -- # read -r var val 00:07:05.322 23:07:01 -- accel/accel.sh@21 -- # val= 00:07:05.322 23:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.322 23:07:01 -- accel/accel.sh@20 -- # IFS=: 00:07:05.322 23:07:01 -- accel/accel.sh@20 -- # read -r var val 00:07:05.322 23:07:01 -- accel/accel.sh@21 -- # val= 00:07:05.322 23:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.322 23:07:01 -- accel/accel.sh@20 -- # IFS=: 00:07:05.322 23:07:01 -- accel/accel.sh@20 -- # read -r var val 00:07:05.322 23:07:01 -- accel/accel.sh@21 -- # val= 00:07:05.322 23:07:01 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:05.322 23:07:01 -- accel/accel.sh@20 -- # IFS=: 00:07:05.322 23:07:01 -- accel/accel.sh@20 -- # read -r var val 00:07:05.322 23:07:01 -- accel/accel.sh@21 -- # val= 00:07:05.322 23:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.322 23:07:01 -- accel/accel.sh@20 -- # IFS=: 00:07:05.322 23:07:01 -- accel/accel.sh@20 -- # read -r var val 00:07:05.322 23:07:01 -- accel/accel.sh@21 -- # val= 00:07:05.322 23:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.322 23:07:01 -- accel/accel.sh@20 -- # IFS=: 00:07:05.322 23:07:01 -- accel/accel.sh@20 -- # read -r var val 00:07:05.322 23:07:01 -- accel/accel.sh@21 -- # val= 00:07:05.322 23:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.322 23:07:01 -- accel/accel.sh@20 -- # IFS=: 00:07:05.322 23:07:01 -- accel/accel.sh@20 -- # read -r var val 00:07:05.322 23:07:01 -- accel/accel.sh@21 -- # val= 00:07:05.322 23:07:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.322 23:07:01 -- accel/accel.sh@20 -- # IFS=: 00:07:05.322 23:07:01 -- accel/accel.sh@20 -- # read -r var val 00:07:05.322 23:07:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.322 23:07:01 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:05.322 23:07:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.322 00:07:05.322 real 0m2.719s 00:07:05.322 user 0m2.466s 00:07:05.322 sys 0m0.261s 00:07:05.322 23:07:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.322 23:07:01 -- common/autotest_common.sh@10 -- # set +x 00:07:05.322 ************************************ 00:07:05.322 END TEST accel_deomp_full_mthread 00:07:05.322 ************************************ 00:07:05.322 23:07:01 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:05.322 23:07:01 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:05.322 23:07:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:05.322 23:07:01 -- accel/accel.sh@129 -- # build_accel_config 00:07:05.322 23:07:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.322 23:07:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.322 23:07:01 -- common/autotest_common.sh@10 -- # set +x 00:07:05.322 23:07:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.322 23:07:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.322 23:07:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.322 23:07:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.322 23:07:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.322 23:07:01 -- accel/accel.sh@42 -- # jq -r . 00:07:05.322 ************************************ 00:07:05.322 START TEST accel_dif_functional_tests 00:07:05.322 ************************************ 00:07:05.322 23:07:01 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:05.322 [2024-11-17 23:07:01.705170] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
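The DIF functional tests starting here use a dedicated CUnit binary rather than accel_perf; the command captured above is:

  /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62

The *ERROR* lines that follow are expected output, not failures: each 'verify: DIF not generated' case feeds data whose Guard, App Tag or Ref Tag does not match the expected value and asserts that the dif.c verify path rejects it, so an error print followed by 'passed' is the success path.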
00:07:05.322 [2024-11-17 23:07:01.705235] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295358 ] 00:07:05.322 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.322 [2024-11-17 23:07:01.773039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.322 [2024-11-17 23:07:01.842439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.322 [2024-11-17 23:07:01.842541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.322 [2024-11-17 23:07:01.842545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.322 00:07:05.322 00:07:05.322 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.322 http://cunit.sourceforge.net/ 00:07:05.322 00:07:05.322 00:07:05.322 Suite: accel_dif 00:07:05.322 Test: verify: DIF generated, GUARD check ...passed 00:07:05.322 Test: verify: DIF generated, APPTAG check ...passed 00:07:05.322 Test: verify: DIF generated, REFTAG check ...passed 00:07:05.322 Test: verify: DIF not generated, GUARD check ...[2024-11-17 23:07:01.911433] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:05.322 [2024-11-17 23:07:01.911484] dif.c: 779:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:05.322 passed 00:07:05.322 Test: verify: DIF not generated, APPTAG check ...[2024-11-17 23:07:01.911519] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:05.322 [2024-11-17 23:07:01.911544] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:05.322 passed 00:07:05.322 Test: verify: DIF not generated, REFTAG check ...[2024-11-17 23:07:01.911566] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:05.322 [2024-11-17 23:07:01.911585] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:05.322 passed 00:07:05.322 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:05.322 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-17 23:07:01.911630] dif.c: 794:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:05.322 passed 00:07:05.322 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:05.322 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:05.322 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:05.322 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-17 23:07:01.911732] dif.c: 815:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:05.322 passed 00:07:05.322 Test: generate copy: DIF generated, GUARD check ...passed 00:07:05.323 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:05.323 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:05.323 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:05.323 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:05.323 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:05.323 Test: generate copy: iovecs-len validate ...[2024-11-17 23:07:01.911910] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:05.323 passed 00:07:05.323 Test: generate copy: buffer alignment validate ...passed 00:07:05.323 00:07:05.323 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.323 suites 1 1 n/a 0 0 00:07:05.323 tests 20 20 20 0 0 00:07:05.323 asserts 204 204 204 0 n/a 00:07:05.323 00:07:05.323 Elapsed time = 0.002 seconds 00:07:05.582 00:07:05.582 real 0m0.391s 00:07:05.582 user 0m0.599s 00:07:05.582 sys 0m0.149s 00:07:05.582 23:07:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.582 23:07:02 -- common/autotest_common.sh@10 -- # set +x 00:07:05.582 ************************************ 00:07:05.582 END TEST accel_dif_functional_tests 00:07:05.582 ************************************ 00:07:05.582 00:07:05.582 real 0m56.939s 00:07:05.582 user 1m4.753s 00:07:05.582 sys 0m6.938s 00:07:05.582 23:07:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:05.582 23:07:02 -- common/autotest_common.sh@10 -- # set +x 00:07:05.582 ************************************ 00:07:05.582 END TEST accel 00:07:05.582 ************************************ 00:07:05.582 23:07:02 -- spdk/autotest.sh@177 -- # run_test accel_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:05.582 23:07:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:05.582 23:07:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.582 23:07:02 -- common/autotest_common.sh@10 -- # set +x 00:07:05.582 ************************************ 00:07:05.582 START TEST accel_rpc 00:07:05.582 ************************************ 00:07:05.582 23:07:02 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:05.841 * Looking for test storage... 00:07:05.841 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/accel 00:07:05.841 23:07:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:05.841 23:07:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:05.841 23:07:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:05.841 23:07:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:05.841 23:07:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:05.841 23:07:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:05.841 23:07:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:05.841 23:07:02 -- scripts/common.sh@335 -- # IFS=.-: 00:07:05.841 23:07:02 -- scripts/common.sh@335 -- # read -ra ver1 00:07:05.841 23:07:02 -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.841 23:07:02 -- scripts/common.sh@336 -- # read -ra ver2 00:07:05.841 23:07:02 -- scripts/common.sh@337 -- # local 'op=<' 00:07:05.841 23:07:02 -- scripts/common.sh@339 -- # ver1_l=2 00:07:05.841 23:07:02 -- scripts/common.sh@340 -- # ver2_l=1 00:07:05.841 23:07:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:05.841 23:07:02 -- scripts/common.sh@343 -- # case "$op" in 00:07:05.841 23:07:02 -- scripts/common.sh@344 -- # : 1 00:07:05.841 23:07:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:05.841 23:07:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.841 23:07:02 -- scripts/common.sh@364 -- # decimal 1 00:07:05.841 23:07:02 -- scripts/common.sh@352 -- # local d=1 00:07:05.841 23:07:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.841 23:07:02 -- scripts/common.sh@354 -- # echo 1 00:07:05.841 23:07:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:05.841 23:07:02 -- scripts/common.sh@365 -- # decimal 2 00:07:05.841 23:07:02 -- scripts/common.sh@352 -- # local d=2 00:07:05.841 23:07:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.841 23:07:02 -- scripts/common.sh@354 -- # echo 2 00:07:05.841 23:07:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:05.841 23:07:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:05.841 23:07:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:05.841 23:07:02 -- scripts/common.sh@367 -- # return 0 00:07:05.841 23:07:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.841 23:07:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:05.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.841 --rc genhtml_branch_coverage=1 00:07:05.841 --rc genhtml_function_coverage=1 00:07:05.841 --rc genhtml_legend=1 00:07:05.841 --rc geninfo_all_blocks=1 00:07:05.841 --rc geninfo_unexecuted_blocks=1 00:07:05.841 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:05.841 ' 00:07:05.841 23:07:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:05.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.841 --rc genhtml_branch_coverage=1 00:07:05.841 --rc genhtml_function_coverage=1 00:07:05.841 --rc genhtml_legend=1 00:07:05.841 --rc geninfo_all_blocks=1 00:07:05.841 --rc geninfo_unexecuted_blocks=1 00:07:05.841 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:05.841 ' 00:07:05.841 23:07:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:05.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.841 --rc genhtml_branch_coverage=1 00:07:05.841 --rc genhtml_function_coverage=1 00:07:05.841 --rc genhtml_legend=1 00:07:05.841 --rc geninfo_all_blocks=1 00:07:05.841 --rc geninfo_unexecuted_blocks=1 00:07:05.841 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:05.841 ' 00:07:05.841 23:07:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:05.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.842 --rc genhtml_branch_coverage=1 00:07:05.842 --rc genhtml_function_coverage=1 00:07:05.842 --rc genhtml_legend=1 00:07:05.842 --rc geninfo_all_blocks=1 00:07:05.842 --rc geninfo_unexecuted_blocks=1 00:07:05.842 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:05.842 ' 00:07:05.842 23:07:02 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:05.842 23:07:02 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1295662 00:07:05.842 23:07:02 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:05.842 23:07:02 -- accel/accel_rpc.sh@15 -- # waitforlisten 1295662 00:07:05.842 23:07:02 -- common/autotest_common.sh@829 -- # '[' -z 1295662 ']' 00:07:05.842 23:07:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.842 23:07:02 -- common/autotest_common.sh@834 -- # local max_retries=100 
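The lcov version check traced above (scripts/common.sh, lt 1.15 2 dispatching to cmp_versions) splits both version strings on '.', '-' and ':', then compares the pieces numerically left to right, treating a missing piece as 0; that is why 1.15 sorts below 2. A condensed sketch of just the '<' branch; the digit validation that the real decimal helper performs is omitted here:

    lt() {
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      local v
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # larger component: not less, stop
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # smaller component: strictly less
      done
      return 1                                          # equal throughout: not strictly less
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2"            # the branch this run takes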
00:07:05.842 23:07:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.842 23:07:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.842 23:07:02 -- common/autotest_common.sh@10 -- # set +x 00:07:05.842 [2024-11-17 23:07:02.375117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:05.842 [2024-11-17 23:07:02.375189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1295662 ] 00:07:05.842 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.842 [2024-11-17 23:07:02.442830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.101 [2024-11-17 23:07:02.517419] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:06.101 [2024-11-17 23:07:02.517523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.668 23:07:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.668 23:07:03 -- common/autotest_common.sh@862 -- # return 0 00:07:06.668 23:07:03 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:06.668 23:07:03 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:06.668 23:07:03 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:06.668 23:07:03 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:06.668 23:07:03 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:06.668 23:07:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:06.668 23:07:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.668 23:07:03 -- common/autotest_common.sh@10 -- # set +x 00:07:06.668 ************************************ 00:07:06.668 START TEST accel_assign_opcode 00:07:06.668 ************************************ 00:07:06.668 23:07:03 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:06.668 23:07:03 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:06.668 23:07:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.668 23:07:03 -- common/autotest_common.sh@10 -- # set +x 00:07:06.668 [2024-11-17 23:07:03.219611] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:06.668 23:07:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.668 23:07:03 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:06.668 23:07:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.668 23:07:03 -- common/autotest_common.sh@10 -- # set +x 00:07:06.668 [2024-11-17 23:07:03.227626] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:06.668 23:07:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.668 23:07:03 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:06.668 23:07:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.668 23:07:03 -- common/autotest_common.sh@10 -- # set +x 00:07:06.927 23:07:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.927 23:07:03 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:06.927 23:07:03 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:06.927 23:07:03 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.927 23:07:03 -- common/autotest_common.sh@10 -- # set +x 00:07:06.927 23:07:03 -- accel/accel_rpc.sh@42 -- # grep software 00:07:06.927 23:07:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.927 software 00:07:06.927 00:07:06.927 real 0m0.229s 00:07:06.927 user 0m0.042s 00:07:06.927 sys 0m0.010s 00:07:06.927 23:07:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.927 23:07:03 -- common/autotest_common.sh@10 -- # set +x 00:07:06.927 ************************************ 00:07:06.927 END TEST accel_assign_opcode 00:07:06.927 ************************************ 00:07:06.927 23:07:03 -- accel/accel_rpc.sh@55 -- # killprocess 1295662 00:07:06.927 23:07:03 -- common/autotest_common.sh@936 -- # '[' -z 1295662 ']' 00:07:06.927 23:07:03 -- common/autotest_common.sh@940 -- # kill -0 1295662 00:07:06.927 23:07:03 -- common/autotest_common.sh@941 -- # uname 00:07:06.927 23:07:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:06.927 23:07:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1295662 00:07:06.927 23:07:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:06.927 23:07:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:06.927 23:07:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1295662' 00:07:06.927 killing process with pid 1295662 00:07:06.927 23:07:03 -- common/autotest_common.sh@955 -- # kill 1295662 00:07:06.927 23:07:03 -- common/autotest_common.sh@960 -- # wait 1295662 00:07:07.494 00:07:07.494 real 0m1.671s 00:07:07.494 user 0m1.697s 00:07:07.494 sys 0m0.468s 00:07:07.494 23:07:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.494 23:07:03 -- common/autotest_common.sh@10 -- # set +x 00:07:07.494 ************************************ 00:07:07.494 END TEST accel_rpc 00:07:07.494 ************************************ 00:07:07.494 23:07:03 -- spdk/autotest.sh@178 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:07.494 23:07:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:07.494 23:07:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.494 23:07:03 -- common/autotest_common.sh@10 -- # set +x 00:07:07.494 ************************************ 00:07:07.494 START TEST app_cmdline 00:07:07.494 ************************************ 00:07:07.494 23:07:03 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:07:07.494 * Looking for test storage... 
00:07:07.494 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:07.494 23:07:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:07.494 23:07:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:07.494 23:07:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:07.494 23:07:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:07.494 23:07:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:07.494 23:07:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:07.494 23:07:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:07.494 23:07:04 -- scripts/common.sh@335 -- # IFS=.-: 00:07:07.494 23:07:04 -- scripts/common.sh@335 -- # read -ra ver1 00:07:07.494 23:07:04 -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.494 23:07:04 -- scripts/common.sh@336 -- # read -ra ver2 00:07:07.494 23:07:04 -- scripts/common.sh@337 -- # local 'op=<' 00:07:07.494 23:07:04 -- scripts/common.sh@339 -- # ver1_l=2 00:07:07.494 23:07:04 -- scripts/common.sh@340 -- # ver2_l=1 00:07:07.494 23:07:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:07.494 23:07:04 -- scripts/common.sh@343 -- # case "$op" in 00:07:07.495 23:07:04 -- scripts/common.sh@344 -- # : 1 00:07:07.495 23:07:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:07.495 23:07:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.495 23:07:04 -- scripts/common.sh@364 -- # decimal 1 00:07:07.495 23:07:04 -- scripts/common.sh@352 -- # local d=1 00:07:07.495 23:07:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.495 23:07:04 -- scripts/common.sh@354 -- # echo 1 00:07:07.495 23:07:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:07.495 23:07:04 -- scripts/common.sh@365 -- # decimal 2 00:07:07.495 23:07:04 -- scripts/common.sh@352 -- # local d=2 00:07:07.495 23:07:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.495 23:07:04 -- scripts/common.sh@354 -- # echo 2 00:07:07.495 23:07:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:07.495 23:07:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:07.495 23:07:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:07.495 23:07:04 -- scripts/common.sh@367 -- # return 0 00:07:07.495 23:07:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.495 23:07:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:07.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.495 --rc genhtml_branch_coverage=1 00:07:07.495 --rc genhtml_function_coverage=1 00:07:07.495 --rc genhtml_legend=1 00:07:07.495 --rc geninfo_all_blocks=1 00:07:07.495 --rc geninfo_unexecuted_blocks=1 00:07:07.495 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:07.495 ' 00:07:07.495 23:07:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:07.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.495 --rc genhtml_branch_coverage=1 00:07:07.495 --rc genhtml_function_coverage=1 00:07:07.495 --rc genhtml_legend=1 00:07:07.495 --rc geninfo_all_blocks=1 00:07:07.495 --rc geninfo_unexecuted_blocks=1 00:07:07.495 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:07.495 ' 00:07:07.495 23:07:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:07.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.495 --rc genhtml_branch_coverage=1 00:07:07.495 
--rc genhtml_function_coverage=1 00:07:07.495 --rc genhtml_legend=1 00:07:07.495 --rc geninfo_all_blocks=1 00:07:07.495 --rc geninfo_unexecuted_blocks=1 00:07:07.495 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:07.495 ' 00:07:07.495 23:07:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:07.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.495 --rc genhtml_branch_coverage=1 00:07:07.495 --rc genhtml_function_coverage=1 00:07:07.495 --rc genhtml_legend=1 00:07:07.495 --rc geninfo_all_blocks=1 00:07:07.495 --rc geninfo_unexecuted_blocks=1 00:07:07.495 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:07.495 ' 00:07:07.495 23:07:04 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:07.495 23:07:04 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1296035 00:07:07.495 23:07:04 -- app/cmdline.sh@18 -- # waitforlisten 1296035 00:07:07.495 23:07:04 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:07.495 23:07:04 -- common/autotest_common.sh@829 -- # '[' -z 1296035 ']' 00:07:07.495 23:07:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.495 23:07:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.495 23:07:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.495 23:07:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.495 23:07:04 -- common/autotest_common.sh@10 -- # set +x 00:07:07.495 [2024-11-17 23:07:04.080384] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
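The spdk_tgt above is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so the target serves only those two methods on /var/tmp/spdk.sock and rejects everything else; the cmdline test exercises exactly that gate, as the responses further down show. A sketch of the three calls involved, using the rpc.py path from this workspace; the expected outcomes are paraphrased from the captured output:

    rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
    $rpc spdk_get_version         # allowed: returns the version string and fields object
    $rpc rpc_get_methods          # allowed: lists the permitted methods
    $rpc env_dpdk_get_mem_stats   # rejected: JSON-RPC error -32601, "Method not found"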
00:07:07.495 [2024-11-17 23:07:04.080471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296035 ] 00:07:07.753 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.753 [2024-11-17 23:07:04.149202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.753 [2024-11-17 23:07:04.222785] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:07.753 [2024-11-17 23:07:04.222905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.375 23:07:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.375 23:07:04 -- common/autotest_common.sh@862 -- # return 0 00:07:08.375 23:07:04 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:08.633 { 00:07:08.633 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:08.633 "fields": { 00:07:08.633 "major": 24, 00:07:08.633 "minor": 1, 00:07:08.633 "patch": 1, 00:07:08.633 "suffix": "-pre", 00:07:08.633 "commit": "c13c99a5e" 00:07:08.633 } 00:07:08.633 } 00:07:08.633 23:07:05 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:08.633 23:07:05 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:08.633 23:07:05 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:08.633 23:07:05 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:08.633 23:07:05 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:08.633 23:07:05 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:08.633 23:07:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.633 23:07:05 -- app/cmdline.sh@26 -- # sort 00:07:08.633 23:07:05 -- common/autotest_common.sh@10 -- # set +x 00:07:08.633 23:07:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.633 23:07:05 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:08.633 23:07:05 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:08.633 23:07:05 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:08.633 23:07:05 -- common/autotest_common.sh@650 -- # local es=0 00:07:08.633 23:07:05 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:08.633 23:07:05 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:08.633 23:07:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.633 23:07:05 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:08.633 23:07:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.633 23:07:05 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:08.633 23:07:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.633 23:07:05 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:07:08.633 23:07:05 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:07:08.633 23:07:05 -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:08.891 request: 00:07:08.891 { 00:07:08.891 "method": "env_dpdk_get_mem_stats", 00:07:08.891 "req_id": 1 00:07:08.891 } 00:07:08.891 Got JSON-RPC error response 00:07:08.891 response: 00:07:08.891 { 00:07:08.891 "code": -32601, 00:07:08.891 "message": "Method not found" 00:07:08.891 } 00:07:08.891 23:07:05 -- common/autotest_common.sh@653 -- # es=1 00:07:08.891 23:07:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.891 23:07:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:08.891 23:07:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.891 23:07:05 -- app/cmdline.sh@1 -- # killprocess 1296035 00:07:08.891 23:07:05 -- common/autotest_common.sh@936 -- # '[' -z 1296035 ']' 00:07:08.891 23:07:05 -- common/autotest_common.sh@940 -- # kill -0 1296035 00:07:08.891 23:07:05 -- common/autotest_common.sh@941 -- # uname 00:07:08.891 23:07:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:08.891 23:07:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1296035 00:07:08.891 23:07:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:08.891 23:07:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:08.891 23:07:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1296035' 00:07:08.891 killing process with pid 1296035 00:07:08.891 23:07:05 -- common/autotest_common.sh@955 -- # kill 1296035 00:07:08.891 23:07:05 -- common/autotest_common.sh@960 -- # wait 1296035 00:07:09.150 00:07:09.150 real 0m1.782s 00:07:09.150 user 0m2.096s 00:07:09.150 sys 0m0.494s 00:07:09.150 23:07:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.150 23:07:05 -- common/autotest_common.sh@10 -- # set +x 00:07:09.150 ************************************ 00:07:09.150 END TEST app_cmdline 00:07:09.150 ************************************ 00:07:09.150 23:07:05 -- spdk/autotest.sh@179 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:09.150 23:07:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:09.150 23:07:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.150 23:07:05 -- common/autotest_common.sh@10 -- # set +x 00:07:09.150 ************************************ 00:07:09.150 START TEST version 00:07:09.150 ************************************ 00:07:09.150 23:07:05 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:07:09.409 * Looking for test storage... 
00:07:09.409 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:09.409 23:07:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:09.409 23:07:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:09.409 23:07:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:09.409 23:07:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:09.409 23:07:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:09.409 23:07:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:09.409 23:07:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:09.409 23:07:05 -- scripts/common.sh@335 -- # IFS=.-: 00:07:09.409 23:07:05 -- scripts/common.sh@335 -- # read -ra ver1 00:07:09.409 23:07:05 -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.409 23:07:05 -- scripts/common.sh@336 -- # read -ra ver2 00:07:09.409 23:07:05 -- scripts/common.sh@337 -- # local 'op=<' 00:07:09.409 23:07:05 -- scripts/common.sh@339 -- # ver1_l=2 00:07:09.409 23:07:05 -- scripts/common.sh@340 -- # ver2_l=1 00:07:09.409 23:07:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:09.409 23:07:05 -- scripts/common.sh@343 -- # case "$op" in 00:07:09.409 23:07:05 -- scripts/common.sh@344 -- # : 1 00:07:09.409 23:07:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:09.409 23:07:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.409 23:07:05 -- scripts/common.sh@364 -- # decimal 1 00:07:09.409 23:07:05 -- scripts/common.sh@352 -- # local d=1 00:07:09.409 23:07:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.409 23:07:05 -- scripts/common.sh@354 -- # echo 1 00:07:09.409 23:07:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:09.409 23:07:05 -- scripts/common.sh@365 -- # decimal 2 00:07:09.409 23:07:05 -- scripts/common.sh@352 -- # local d=2 00:07:09.409 23:07:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.409 23:07:05 -- scripts/common.sh@354 -- # echo 2 00:07:09.409 23:07:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:09.409 23:07:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:09.409 23:07:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:09.409 23:07:05 -- scripts/common.sh@367 -- # return 0 00:07:09.409 23:07:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.409 23:07:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:09.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.409 --rc genhtml_branch_coverage=1 00:07:09.409 --rc genhtml_function_coverage=1 00:07:09.409 --rc genhtml_legend=1 00:07:09.409 --rc geninfo_all_blocks=1 00:07:09.409 --rc geninfo_unexecuted_blocks=1 00:07:09.409 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:09.409 ' 00:07:09.409 23:07:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:09.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.409 --rc genhtml_branch_coverage=1 00:07:09.409 --rc genhtml_function_coverage=1 00:07:09.409 --rc genhtml_legend=1 00:07:09.409 --rc geninfo_all_blocks=1 00:07:09.409 --rc geninfo_unexecuted_blocks=1 00:07:09.409 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:09.409 ' 00:07:09.409 23:07:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:09.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.409 --rc genhtml_branch_coverage=1 00:07:09.409 
--rc genhtml_function_coverage=1 00:07:09.409 --rc genhtml_legend=1 00:07:09.409 --rc geninfo_all_blocks=1 00:07:09.409 --rc geninfo_unexecuted_blocks=1 00:07:09.409 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:09.409 ' 00:07:09.409 23:07:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:09.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.409 --rc genhtml_branch_coverage=1 00:07:09.409 --rc genhtml_function_coverage=1 00:07:09.409 --rc genhtml_legend=1 00:07:09.410 --rc geninfo_all_blocks=1 00:07:09.410 --rc geninfo_unexecuted_blocks=1 00:07:09.410 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:09.410 ' 00:07:09.410 23:07:05 -- app/version.sh@17 -- # get_header_version major 00:07:09.410 23:07:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:09.410 23:07:05 -- app/version.sh@14 -- # cut -f2 00:07:09.410 23:07:05 -- app/version.sh@14 -- # tr -d '"' 00:07:09.410 23:07:05 -- app/version.sh@17 -- # major=24 00:07:09.410 23:07:05 -- app/version.sh@18 -- # get_header_version minor 00:07:09.410 23:07:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:09.410 23:07:05 -- app/version.sh@14 -- # cut -f2 00:07:09.410 23:07:05 -- app/version.sh@14 -- # tr -d '"' 00:07:09.410 23:07:05 -- app/version.sh@18 -- # minor=1 00:07:09.410 23:07:05 -- app/version.sh@19 -- # get_header_version patch 00:07:09.410 23:07:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:09.410 23:07:05 -- app/version.sh@14 -- # cut -f2 00:07:09.410 23:07:05 -- app/version.sh@14 -- # tr -d '"' 00:07:09.410 23:07:05 -- app/version.sh@19 -- # patch=1 00:07:09.410 23:07:05 -- app/version.sh@20 -- # get_header_version suffix 00:07:09.410 23:07:05 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:07:09.410 23:07:05 -- app/version.sh@14 -- # cut -f2 00:07:09.410 23:07:05 -- app/version.sh@14 -- # tr -d '"' 00:07:09.410 23:07:05 -- app/version.sh@20 -- # suffix=-pre 00:07:09.410 23:07:05 -- app/version.sh@22 -- # version=24.1 00:07:09.410 23:07:05 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:09.410 23:07:05 -- app/version.sh@25 -- # version=24.1.1 00:07:09.410 23:07:05 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:09.410 23:07:05 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:09.410 23:07:05 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:09.410 23:07:05 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:09.410 23:07:05 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:09.410 00:07:09.410 real 0m0.234s 00:07:09.410 user 0m0.128s 00:07:09.410 sys 0m0.158s 00:07:09.410 23:07:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.410 23:07:05 -- common/autotest_common.sh@10 -- # set +x 00:07:09.410 
************************************ 00:07:09.410 END TEST version 00:07:09.410 ************************************ 00:07:09.410 23:07:05 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:09.410 23:07:05 -- spdk/autotest.sh@191 -- # uname -s 00:07:09.410 23:07:05 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:07:09.410 23:07:06 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:09.410 23:07:06 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:09.410 23:07:06 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:07:09.410 23:07:06 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:07:09.410 23:07:06 -- spdk/autotest.sh@255 -- # timing_exit lib 00:07:09.410 23:07:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:09.410 23:07:06 -- common/autotest_common.sh@10 -- # set +x 00:07:09.670 23:07:06 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:07:09.670 23:07:06 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:07:09.670 23:07:06 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:09.670 23:07:06 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:07:09.670 23:07:06 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:07:09.670 23:07:06 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:07:09.670 23:07:06 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:07:09.670 23:07:06 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:07:09.670 23:07:06 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:07:09.670 23:07:06 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:07:09.670 23:07:06 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:07:09.670 23:07:06 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:07:09.670 23:07:06 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:07:09.670 23:07:06 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:07:09.670 23:07:06 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:07:09.670 23:07:06 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:07:09.670 23:07:06 -- spdk/autotest.sh@361 -- # [[ 1 -eq 1 ]] 00:07:09.670 23:07:06 -- spdk/autotest.sh@362 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:09.670 23:07:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:09.670 23:07:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.670 23:07:06 -- common/autotest_common.sh@10 -- # set +x 00:07:09.670 ************************************ 00:07:09.670 START TEST llvm_fuzz 00:07:09.670 ************************************ 00:07:09.670 23:07:06 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:07:09.670 * Looking for test storage... 
00:07:09.670 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:07:09.670 23:07:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:09.670 23:07:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:09.670 23:07:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:09.670 23:07:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:09.670 23:07:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:09.670 23:07:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:09.670 23:07:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:09.670 23:07:06 -- scripts/common.sh@335 -- # IFS=.-: 00:07:09.670 23:07:06 -- scripts/common.sh@335 -- # read -ra ver1 00:07:09.670 23:07:06 -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.670 23:07:06 -- scripts/common.sh@336 -- # read -ra ver2 00:07:09.670 23:07:06 -- scripts/common.sh@337 -- # local 'op=<' 00:07:09.670 23:07:06 -- scripts/common.sh@339 -- # ver1_l=2 00:07:09.670 23:07:06 -- scripts/common.sh@340 -- # ver2_l=1 00:07:09.670 23:07:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:09.670 23:07:06 -- scripts/common.sh@343 -- # case "$op" in 00:07:09.670 23:07:06 -- scripts/common.sh@344 -- # : 1 00:07:09.670 23:07:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:09.670 23:07:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.670 23:07:06 -- scripts/common.sh@364 -- # decimal 1 00:07:09.670 23:07:06 -- scripts/common.sh@352 -- # local d=1 00:07:09.670 23:07:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.670 23:07:06 -- scripts/common.sh@354 -- # echo 1 00:07:09.670 23:07:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:09.670 23:07:06 -- scripts/common.sh@365 -- # decimal 2 00:07:09.670 23:07:06 -- scripts/common.sh@352 -- # local d=2 00:07:09.670 23:07:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.670 23:07:06 -- scripts/common.sh@354 -- # echo 2 00:07:09.670 23:07:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:09.670 23:07:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:09.670 23:07:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:09.670 23:07:06 -- scripts/common.sh@367 -- # return 0 00:07:09.670 23:07:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.670 23:07:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:09.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.670 --rc genhtml_branch_coverage=1 00:07:09.670 --rc genhtml_function_coverage=1 00:07:09.670 --rc genhtml_legend=1 00:07:09.670 --rc geninfo_all_blocks=1 00:07:09.670 --rc geninfo_unexecuted_blocks=1 00:07:09.670 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:09.670 ' 00:07:09.670 23:07:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:09.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.670 --rc genhtml_branch_coverage=1 00:07:09.670 --rc genhtml_function_coverage=1 00:07:09.670 --rc genhtml_legend=1 00:07:09.670 --rc geninfo_all_blocks=1 00:07:09.670 --rc geninfo_unexecuted_blocks=1 00:07:09.670 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:09.670 ' 00:07:09.670 23:07:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:09.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.670 --rc genhtml_branch_coverage=1 00:07:09.670 
--rc genhtml_function_coverage=1 00:07:09.670 --rc genhtml_legend=1 00:07:09.670 --rc geninfo_all_blocks=1 00:07:09.670 --rc geninfo_unexecuted_blocks=1 00:07:09.670 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:09.670 ' 00:07:09.670 23:07:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:09.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.670 --rc genhtml_branch_coverage=1 00:07:09.670 --rc genhtml_function_coverage=1 00:07:09.670 --rc genhtml_legend=1 00:07:09.670 --rc geninfo_all_blocks=1 00:07:09.670 --rc geninfo_unexecuted_blocks=1 00:07:09.670 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:09.670 ' 00:07:09.670 23:07:06 -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:07:09.670 23:07:06 -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:07:09.670 23:07:06 -- common/autotest_common.sh@548 -- # fuzzers=() 00:07:09.670 23:07:06 -- common/autotest_common.sh@548 -- # local fuzzers 00:07:09.670 23:07:06 -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:07:09.670 23:07:06 -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:07:09.670 23:07:06 -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:07:09.670 23:07:06 -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:07:09.670 23:07:06 -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:07:09.670 23:07:06 -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/coverage 00:07:09.670 23:07:06 -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:09.670 23:07:06 -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:09.670 23:07:06 -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:09.670 23:07:06 -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:09.670 23:07:06 -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:07:09.670 23:07:06 -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:07:09.670 23:07:06 -- fuzz/llvm.sh@19 -- # run_test nvmf_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:09.670 23:07:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:09.670 23:07:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.670 23:07:06 -- common/autotest_common.sh@10 -- # set +x 00:07:09.670 ************************************ 00:07:09.670 START TEST nvmf_fuzz 00:07:09.670 ************************************ 00:07:09.670 23:07:06 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:07:09.931 * Looking for test storage... 
00:07:09.931 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:09.931 23:07:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:09.931 23:07:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:09.931 23:07:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:09.931 23:07:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:09.931 23:07:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:09.931 23:07:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:09.931 23:07:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:09.931 23:07:06 -- scripts/common.sh@335 -- # IFS=.-: 00:07:09.931 23:07:06 -- scripts/common.sh@335 -- # read -ra ver1 00:07:09.931 23:07:06 -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.931 23:07:06 -- scripts/common.sh@336 -- # read -ra ver2 00:07:09.931 23:07:06 -- scripts/common.sh@337 -- # local 'op=<' 00:07:09.931 23:07:06 -- scripts/common.sh@339 -- # ver1_l=2 00:07:09.931 23:07:06 -- scripts/common.sh@340 -- # ver2_l=1 00:07:09.931 23:07:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:09.931 23:07:06 -- scripts/common.sh@343 -- # case "$op" in 00:07:09.931 23:07:06 -- scripts/common.sh@344 -- # : 1 00:07:09.931 23:07:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:09.931 23:07:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.931 23:07:06 -- scripts/common.sh@364 -- # decimal 1 00:07:09.931 23:07:06 -- scripts/common.sh@352 -- # local d=1 00:07:09.931 23:07:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.931 23:07:06 -- scripts/common.sh@354 -- # echo 1 00:07:09.931 23:07:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:09.931 23:07:06 -- scripts/common.sh@365 -- # decimal 2 00:07:09.931 23:07:06 -- scripts/common.sh@352 -- # local d=2 00:07:09.931 23:07:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.931 23:07:06 -- scripts/common.sh@354 -- # echo 2 00:07:09.931 23:07:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:09.931 23:07:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:09.931 23:07:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:09.931 23:07:06 -- scripts/common.sh@367 -- # return 0 00:07:09.932 23:07:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.932 23:07:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:09.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.932 --rc genhtml_branch_coverage=1 00:07:09.932 --rc genhtml_function_coverage=1 00:07:09.932 --rc genhtml_legend=1 00:07:09.932 --rc geninfo_all_blocks=1 00:07:09.932 --rc geninfo_unexecuted_blocks=1 00:07:09.932 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:09.932 ' 00:07:09.932 23:07:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:09.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.932 --rc genhtml_branch_coverage=1 00:07:09.932 --rc genhtml_function_coverage=1 00:07:09.932 --rc genhtml_legend=1 00:07:09.932 --rc geninfo_all_blocks=1 00:07:09.932 --rc geninfo_unexecuted_blocks=1 00:07:09.932 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:09.932 ' 00:07:09.932 23:07:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:09.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.932 --rc genhtml_branch_coverage=1 
00:07:09.932 --rc genhtml_function_coverage=1 00:07:09.932 --rc genhtml_legend=1 00:07:09.932 --rc geninfo_all_blocks=1 00:07:09.932 --rc geninfo_unexecuted_blocks=1 00:07:09.932 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:09.932 ' 00:07:09.932 23:07:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:09.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.932 --rc genhtml_branch_coverage=1 00:07:09.932 --rc genhtml_function_coverage=1 00:07:09.932 --rc genhtml_legend=1 00:07:09.932 --rc geninfo_all_blocks=1 00:07:09.932 --rc geninfo_unexecuted_blocks=1 00:07:09.932 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:09.932 ' 00:07:09.932 23:07:06 -- nvmf/run.sh@52 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:07:09.932 23:07:06 -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:07:09.932 23:07:06 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:09.932 23:07:06 -- common/autotest_common.sh@34 -- # set -e 00:07:09.932 23:07:06 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:09.932 23:07:06 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:09.932 23:07:06 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:09.932 23:07:06 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:07:09.932 23:07:06 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:09.932 23:07:06 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:09.932 23:07:06 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:09.932 23:07:06 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:09.932 23:07:06 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:09.932 23:07:06 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:09.932 23:07:06 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:09.932 23:07:06 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:09.932 23:07:06 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:09.932 23:07:06 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:09.932 23:07:06 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:09.932 23:07:06 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:09.932 23:07:06 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:09.932 23:07:06 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:09.932 23:07:06 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:09.932 23:07:06 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:09.932 23:07:06 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:09.932 23:07:06 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:09.932 23:07:06 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:09.932 23:07:06 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:09.932 23:07:06 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:09.932 23:07:06 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:09.932 23:07:06 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:09.932 23:07:06 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:09.932 23:07:06 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:09.932 
23:07:06 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:09.932 23:07:06 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:09.932 23:07:06 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:09.932 23:07:06 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:09.932 23:07:06 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:09.932 23:07:06 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:09.932 23:07:06 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:09.932 23:07:06 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:09.932 23:07:06 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:09.932 23:07:06 -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:07:09.932 23:07:06 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:09.932 23:07:06 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:09.932 23:07:06 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:09.932 23:07:06 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:09.932 23:07:06 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:09.932 23:07:06 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:09.932 23:07:06 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:09.932 23:07:06 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:09.932 23:07:06 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:09.932 23:07:06 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:09.932 23:07:06 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:09.932 23:07:06 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:09.932 23:07:06 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:09.932 23:07:06 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:09.932 23:07:06 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:09.932 23:07:06 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:09.932 23:07:06 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:09.932 23:07:06 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:09.932 23:07:06 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:09.932 23:07:06 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:09.932 23:07:06 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:09.932 23:07:06 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:09.932 23:07:06 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:09.932 23:07:06 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:09.932 23:07:06 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:09.932 23:07:06 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:07:09.932 23:07:06 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:09.932 23:07:06 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:09.932 23:07:06 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:07:09.932 23:07:06 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:09.932 23:07:06 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:09.932 23:07:06 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:09.932 23:07:06 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:09.932 23:07:06 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:09.932 23:07:06 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:09.932 23:07:06 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 
00:07:09.932 23:07:06 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:09.932 23:07:06 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:09.932 23:07:06 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:09.932 23:07:06 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:09.932 23:07:06 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:09.932 23:07:06 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:09.932 23:07:06 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:09.932 23:07:06 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:09.932 23:07:06 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:09.932 23:07:06 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:07:09.932 23:07:06 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:09.932 23:07:06 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:07:09.932 23:07:06 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:09.932 23:07:06 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:09.932 23:07:06 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:07:09.932 23:07:06 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:09.932 23:07:06 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:09.932 23:07:06 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:09.932 23:07:06 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:09.932 23:07:06 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:09.932 23:07:06 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:09.932 23:07:06 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:09.932 23:07:06 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:07:09.932 23:07:06 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:09.932 #define SPDK_CONFIG_H 00:07:09.932 #define SPDK_CONFIG_APPS 1 00:07:09.932 #define SPDK_CONFIG_ARCH native 00:07:09.932 #undef SPDK_CONFIG_ASAN 00:07:09.932 #undef SPDK_CONFIG_AVAHI 00:07:09.932 #undef SPDK_CONFIG_CET 00:07:09.932 #define SPDK_CONFIG_COVERAGE 1 00:07:09.932 #define SPDK_CONFIG_CROSS_PREFIX 00:07:09.932 #undef SPDK_CONFIG_CRYPTO 00:07:09.932 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:09.932 #undef SPDK_CONFIG_CUSTOMOCF 00:07:09.932 #undef SPDK_CONFIG_DAOS 00:07:09.932 #define SPDK_CONFIG_DAOS_DIR 00:07:09.932 #define SPDK_CONFIG_DEBUG 1 00:07:09.932 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:09.932 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:07:09.932 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:09.932 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:09.932 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:09.932 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:07:09.932 #define SPDK_CONFIG_EXAMPLES 1 00:07:09.932 #undef SPDK_CONFIG_FC 00:07:09.932 #define SPDK_CONFIG_FC_PATH 00:07:09.932 #define SPDK_CONFIG_FIO_PLUGIN 1 
00:07:09.932 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:09.932 #undef SPDK_CONFIG_FUSE 00:07:09.932 #define SPDK_CONFIG_FUZZER 1 00:07:09.932 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:07:09.932 #undef SPDK_CONFIG_GOLANG 00:07:09.932 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:09.932 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:09.932 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:09.932 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:09.932 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:09.932 #define SPDK_CONFIG_IDXD 1 00:07:09.932 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:09.932 #undef SPDK_CONFIG_IPSEC_MB 00:07:09.932 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:09.932 #define SPDK_CONFIG_ISAL 1 00:07:09.932 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:09.932 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:09.932 #define SPDK_CONFIG_LIBDIR 00:07:09.932 #undef SPDK_CONFIG_LTO 00:07:09.932 #define SPDK_CONFIG_MAX_LCORES 00:07:09.932 #define SPDK_CONFIG_NVME_CUSE 1 00:07:09.932 #undef SPDK_CONFIG_OCF 00:07:09.932 #define SPDK_CONFIG_OCF_PATH 00:07:09.932 #define SPDK_CONFIG_OPENSSL_PATH 00:07:09.932 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:09.932 #undef SPDK_CONFIG_PGO_USE 00:07:09.932 #define SPDK_CONFIG_PREFIX /usr/local 00:07:09.932 #undef SPDK_CONFIG_RAID5F 00:07:09.932 #undef SPDK_CONFIG_RBD 00:07:09.932 #define SPDK_CONFIG_RDMA 1 00:07:09.932 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:09.932 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:09.932 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:09.932 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:09.932 #undef SPDK_CONFIG_SHARED 00:07:09.932 #undef SPDK_CONFIG_SMA 00:07:09.932 #define SPDK_CONFIG_TESTS 1 00:07:09.932 #undef SPDK_CONFIG_TSAN 00:07:09.932 #define SPDK_CONFIG_UBLK 1 00:07:09.932 #define SPDK_CONFIG_UBSAN 1 00:07:09.932 #undef SPDK_CONFIG_UNIT_TESTS 00:07:09.932 #undef SPDK_CONFIG_URING 00:07:09.932 #define SPDK_CONFIG_URING_PATH 00:07:09.932 #undef SPDK_CONFIG_URING_ZNS 00:07:09.932 #undef SPDK_CONFIG_USDT 00:07:09.932 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:09.932 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:09.932 #define SPDK_CONFIG_VFIO_USER 1 00:07:09.932 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:09.932 #define SPDK_CONFIG_VHOST 1 00:07:09.932 #define SPDK_CONFIG_VIRTIO 1 00:07:09.932 #undef SPDK_CONFIG_VTUNE 00:07:09.932 #define SPDK_CONFIG_VTUNE_DIR 00:07:09.932 #define SPDK_CONFIG_WERROR 1 00:07:09.932 #define SPDK_CONFIG_WPDK_DIR 00:07:09.932 #undef SPDK_CONFIG_XNVME 00:07:09.932 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:09.932 23:07:06 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:09.932 23:07:06 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:07:09.932 23:07:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.932 23:07:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.932 23:07:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.932 23:07:06 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.932 23:07:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.932 23:07:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.932 23:07:06 -- paths/export.sh@5 -- # export PATH 00:07:09.932 23:07:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.932 23:07:06 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:09.932 23:07:06 -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:07:09.932 23:07:06 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:09.932 23:07:06 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:07:09.932 23:07:06 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:09.932 23:07:06 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:07:09.932 23:07:06 -- pm/common@16 -- # TEST_TAG=N/A 00:07:09.932 23:07:06 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:07:09.932 23:07:06 -- common/autotest_common.sh@52 -- # : 1 00:07:09.932 23:07:06 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:09.932 23:07:06 -- common/autotest_common.sh@56 -- # : 0 00:07:09.932 23:07:06 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:09.932 23:07:06 -- common/autotest_common.sh@58 -- # : 0 00:07:09.932 23:07:06 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:09.932 23:07:06 -- common/autotest_common.sh@60 -- # : 1 00:07:09.932 23:07:06 -- common/autotest_common.sh@61 -- # export 
SPDK_RUN_FUNCTIONAL_TEST 00:07:09.932 23:07:06 -- common/autotest_common.sh@62 -- # : 0 00:07:09.932 23:07:06 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:09.932 23:07:06 -- common/autotest_common.sh@64 -- # : 00:07:09.932 23:07:06 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:09.932 23:07:06 -- common/autotest_common.sh@66 -- # : 0 00:07:09.932 23:07:06 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:09.932 23:07:06 -- common/autotest_common.sh@68 -- # : 0 00:07:09.932 23:07:06 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:09.932 23:07:06 -- common/autotest_common.sh@70 -- # : 0 00:07:09.932 23:07:06 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:09.932 23:07:06 -- common/autotest_common.sh@72 -- # : 0 00:07:09.932 23:07:06 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:09.932 23:07:06 -- common/autotest_common.sh@74 -- # : 0 00:07:09.932 23:07:06 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:09.932 23:07:06 -- common/autotest_common.sh@76 -- # : 0 00:07:09.932 23:07:06 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:09.932 23:07:06 -- common/autotest_common.sh@78 -- # : 0 00:07:09.932 23:07:06 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:09.932 23:07:06 -- common/autotest_common.sh@80 -- # : 0 00:07:09.932 23:07:06 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:09.932 23:07:06 -- common/autotest_common.sh@82 -- # : 0 00:07:09.932 23:07:06 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:09.932 23:07:06 -- common/autotest_common.sh@84 -- # : 0 00:07:09.932 23:07:06 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:09.932 23:07:06 -- common/autotest_common.sh@86 -- # : 0 00:07:09.932 23:07:06 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:09.932 23:07:06 -- common/autotest_common.sh@88 -- # : 0 00:07:09.932 23:07:06 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:09.932 23:07:06 -- common/autotest_common.sh@90 -- # : 0 00:07:09.932 23:07:06 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:09.932 23:07:06 -- common/autotest_common.sh@92 -- # : 1 00:07:09.932 23:07:06 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:09.932 23:07:06 -- common/autotest_common.sh@94 -- # : 1 00:07:09.932 23:07:06 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:09.932 23:07:06 -- common/autotest_common.sh@96 -- # : rdma 00:07:09.932 23:07:06 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:09.932 23:07:06 -- common/autotest_common.sh@98 -- # : 0 00:07:09.932 23:07:06 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:09.933 23:07:06 -- common/autotest_common.sh@100 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:09.933 23:07:06 -- common/autotest_common.sh@102 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:09.933 23:07:06 -- common/autotest_common.sh@104 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:09.933 23:07:06 -- common/autotest_common.sh@106 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:09.933 23:07:06 -- common/autotest_common.sh@108 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@109 
-- # export SPDK_TEST_VHOST_INIT 00:07:09.933 23:07:06 -- common/autotest_common.sh@110 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:09.933 23:07:06 -- common/autotest_common.sh@112 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:09.933 23:07:06 -- common/autotest_common.sh@114 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:09.933 23:07:06 -- common/autotest_common.sh@116 -- # : 1 00:07:09.933 23:07:06 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:09.933 23:07:06 -- common/autotest_common.sh@118 -- # : 00:07:09.933 23:07:06 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:09.933 23:07:06 -- common/autotest_common.sh@120 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:09.933 23:07:06 -- common/autotest_common.sh@122 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:09.933 23:07:06 -- common/autotest_common.sh@124 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:09.933 23:07:06 -- common/autotest_common.sh@126 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:09.933 23:07:06 -- common/autotest_common.sh@128 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:09.933 23:07:06 -- common/autotest_common.sh@130 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:09.933 23:07:06 -- common/autotest_common.sh@132 -- # : 00:07:09.933 23:07:06 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:09.933 23:07:06 -- common/autotest_common.sh@134 -- # : true 00:07:09.933 23:07:06 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:09.933 23:07:06 -- common/autotest_common.sh@136 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:09.933 23:07:06 -- common/autotest_common.sh@138 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:09.933 23:07:06 -- common/autotest_common.sh@140 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:09.933 23:07:06 -- common/autotest_common.sh@142 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:09.933 23:07:06 -- common/autotest_common.sh@144 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:09.933 23:07:06 -- common/autotest_common.sh@146 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:09.933 23:07:06 -- common/autotest_common.sh@148 -- # : 00:07:09.933 23:07:06 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:09.933 23:07:06 -- common/autotest_common.sh@150 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:09.933 23:07:06 -- common/autotest_common.sh@152 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:09.933 23:07:06 -- common/autotest_common.sh@154 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:09.933 23:07:06 -- common/autotest_common.sh@156 -- # : 0 00:07:09.933 23:07:06 -- 
common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:09.933 23:07:06 -- common/autotest_common.sh@158 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:09.933 23:07:06 -- common/autotest_common.sh@160 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:09.933 23:07:06 -- common/autotest_common.sh@163 -- # : 00:07:09.933 23:07:06 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:09.933 23:07:06 -- common/autotest_common.sh@165 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:09.933 23:07:06 -- common/autotest_common.sh@167 -- # : 0 00:07:09.933 23:07:06 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:09.933 23:07:06 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:09.933 23:07:06 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:07:09.933 23:07:06 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:09.933 23:07:06 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:07:09.933 23:07:06 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:09.933 23:07:06 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:09.933 23:07:06 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:09.933 23:07:06 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:09.933 23:07:06 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:09.933 23:07:06 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:09.933 23:07:06 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:09.933 23:07:06 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:07:09.933 23:07:06 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:09.933 23:07:06 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:09.933 23:07:06 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:09.933 23:07:06 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:09.933 23:07:06 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:09.933 23:07:06 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:09.933 23:07:06 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:09.933 23:07:06 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:09.933 23:07:06 -- common/autotest_common.sh@196 -- # cat 00:07:09.933 23:07:06 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:09.933 23:07:06 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:09.933 23:07:06 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:09.933 23:07:06 -- 
common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:09.933 23:07:06 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:09.933 23:07:06 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:09.933 23:07:06 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:09.933 23:07:06 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:09.933 23:07:06 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:07:09.933 23:07:06 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:09.933 23:07:06 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:07:09.933 23:07:06 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:09.933 23:07:06 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:09.933 23:07:06 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:09.933 23:07:06 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:09.933 23:07:06 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:09.933 23:07:06 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:09.933 23:07:06 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:09.933 23:07:06 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:09.933 23:07:06 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:07:09.933 23:07:06 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:07:09.933 23:07:06 -- common/autotest_common.sh@249 -- # _LCOV= 00:07:09.933 23:07:06 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:07:09.933 23:07:06 -- common/autotest_common.sh@250 -- # [[ 1 -eq 1 ]] 00:07:09.933 23:07:06 -- common/autotest_common.sh@250 -- # _LCOV=1 00:07:09.933 23:07:06 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:09.933 23:07:06 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:07:09.933 23:07:06 -- common/autotest_common.sh@255 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:07:09.933 23:07:06 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:07:09.933 23:07:06 -- common/autotest_common.sh@259 -- # export valgrind= 00:07:09.933 23:07:06 -- common/autotest_common.sh@259 -- # valgrind= 00:07:09.933 23:07:06 -- common/autotest_common.sh@265 -- # uname -s 00:07:09.933 23:07:06 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:07:09.933 23:07:06 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:07:09.933 23:07:06 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:07:09.933 23:07:06 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:07:09.933 23:07:06 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:09.933 23:07:06 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:07:09.933 23:07:06 
-- common/autotest_common.sh@275 -- # MAKE=make 00:07:09.933 23:07:06 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j112 00:07:09.933 23:07:06 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:07:09.933 23:07:06 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:07:09.933 23:07:06 -- common/autotest_common.sh@294 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:07:09.933 23:07:06 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:07:09.933 23:07:06 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:07:09.933 23:07:06 -- common/autotest_common.sh@319 -- # [[ -z 1296478 ]] 00:07:09.933 23:07:06 -- common/autotest_common.sh@319 -- # kill -0 1296478 00:07:09.933 23:07:06 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:07:09.933 23:07:06 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:07:09.933 23:07:06 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:07:09.933 23:07:06 -- common/autotest_common.sh@332 -- # local mount target_dir 00:07:09.933 23:07:06 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:07:09.933 23:07:06 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:07:09.933 23:07:06 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:07:09.933 23:07:06 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:07:09.933 23:07:06 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.7Z6R7V 00:07:09.933 23:07:06 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:09.933 23:07:06 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:07:09.933 23:07:06 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:07:09.933 23:07:06 -- common/autotest_common.sh@356 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.7Z6R7V/tests/nvmf /tmp/spdk.7Z6R7V 00:07:09.933 23:07:06 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:07:09.933 23:07:06 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:09.933 23:07:06 -- common/autotest_common.sh@328 -- # df -T 00:07:09.933 23:07:06 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:07:09.933 23:07:06 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_devtmpfs 00:07:09.933 23:07:06 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:07:09.933 23:07:06 -- common/autotest_common.sh@363 -- # avails["$mount"]=67108864 00:07:09.933 23:07:06 -- common/autotest_common.sh@363 -- # sizes["$mount"]=67108864 00:07:09.933 23:07:06 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:07:09.933 23:07:06 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:09.933 23:07:06 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/pmem0 00:07:09.933 23:07:06 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext2 00:07:09.933 23:07:06 -- common/autotest_common.sh@363 -- # avails["$mount"]=4096 00:07:09.933 23:07:06 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5284429824 00:07:09.933 23:07:06 -- common/autotest_common.sh@364 -- # uses["$mount"]=5284425728 00:07:09.933 23:07:06 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:09.933 23:07:06 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_root 00:07:09.933 23:07:06 -- common/autotest_common.sh@362 -- # fss["$mount"]=overlay 
00:07:09.933 23:07:06 -- common/autotest_common.sh@363 -- # avails["$mount"]=54453358592 00:07:09.933 23:07:06 -- common/autotest_common.sh@363 -- # sizes["$mount"]=61730574336 00:07:09.933 23:07:06 -- common/autotest_common.sh@364 -- # uses["$mount"]=7277215744 00:07:09.933 23:07:06 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:09.933 23:07:06 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:09.933 23:07:06 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:09.933 23:07:06 -- common/autotest_common.sh@363 -- # avails["$mount"]=30864027648 00:07:09.933 23:07:06 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865285120 00:07:09.933 23:07:06 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:07:09.933 23:07:06 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:09.933 23:07:06 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:09.933 23:07:06 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:09.933 23:07:06 -- common/autotest_common.sh@363 -- # avails["$mount"]=12340117504 00:07:09.933 23:07:06 -- common/autotest_common.sh@363 -- # sizes["$mount"]=12346118144 00:07:09.933 23:07:06 -- common/autotest_common.sh@364 -- # uses["$mount"]=6000640 00:07:09.933 23:07:06 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:09.933 23:07:06 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:09.933 23:07:06 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:09.933 23:07:06 -- common/autotest_common.sh@363 -- # avails["$mount"]=30865088512 00:07:09.933 23:07:06 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865289216 00:07:09.933 23:07:06 -- common/autotest_common.sh@364 -- # uses["$mount"]=200704 00:07:09.933 23:07:06 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:09.933 23:07:06 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:07:09.933 23:07:06 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:07:09.933 23:07:06 -- common/autotest_common.sh@363 -- # avails["$mount"]=6173044736 00:07:09.933 23:07:06 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6173057024 00:07:09.933 23:07:06 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:07:09.933 23:07:06 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:07:09.933 23:07:06 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:07:09.933 * Looking for test storage... 
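(The set_test_storage pass traced above reads `df -T`, fills the mounts/fss/sizes/avails arrays for each filesystem, then walks the candidate directories until one sits on a mount with enough free space. A minimal standalone sketch of that selection logic — assuming GNU coreutils df with --output support and the 2 GiB request shown in the trace; this is an illustration, not the verbatim SPDK helper:

    requested_size=2147483648                                     # 2 GiB, as requested in the trace
    for dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        mount=$(df --output=target "$dir" 2>/dev/null | tail -1)  # mount point backing this candidate
        avail=$(df --output=avail -B1 "$mount" | tail -1)         # free bytes on that mount
        if (( avail >= requested_size )); then
            printf '* Found test storage at %s\n' "$dir"
            break
        fi
    done
)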
00:07:09.933 23:07:06 -- common/autotest_common.sh@369 -- # local target_space new_size 00:07:09.933 23:07:06 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:07:09.933 23:07:06 -- common/autotest_common.sh@373 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:09.933 23:07:06 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:09.933 23:07:06 -- common/autotest_common.sh@373 -- # mount=/ 00:07:09.933 23:07:06 -- common/autotest_common.sh@375 -- # target_space=54453358592 00:07:09.933 23:07:06 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:07:09.933 23:07:06 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:07:09.933 23:07:06 -- common/autotest_common.sh@381 -- # [[ overlay == tmpfs ]] 00:07:09.933 23:07:06 -- common/autotest_common.sh@381 -- # [[ overlay == ramfs ]] 00:07:09.933 23:07:06 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:07:09.933 23:07:06 -- common/autotest_common.sh@382 -- # new_size=9491808256 00:07:09.933 23:07:06 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:09.933 23:07:06 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:09.933 23:07:06 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:09.933 23:07:06 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:09.934 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:07:09.934 23:07:06 -- common/autotest_common.sh@390 -- # return 0 00:07:09.934 23:07:06 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:07:09.934 23:07:06 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:07:09.934 23:07:06 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:09.934 23:07:06 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:09.934 23:07:06 -- common/autotest_common.sh@1682 -- # true 00:07:09.934 23:07:06 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:07:09.934 23:07:06 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:09.934 23:07:06 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:09.934 23:07:06 -- common/autotest_common.sh@27 -- # exec 00:07:09.934 23:07:06 -- common/autotest_common.sh@29 -- # exec 00:07:09.934 23:07:06 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:09.934 23:07:06 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:09.934 23:07:06 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:09.934 23:07:06 -- common/autotest_common.sh@18 -- # set -x 00:07:09.934 23:07:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:09.934 23:07:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:09.934 23:07:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:10.193 23:07:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:10.194 23:07:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:10.194 23:07:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:10.194 23:07:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:10.194 23:07:06 -- scripts/common.sh@335 -- # IFS=.-: 00:07:10.194 23:07:06 -- scripts/common.sh@335 -- # read -ra ver1 00:07:10.194 23:07:06 -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.194 23:07:06 -- scripts/common.sh@336 -- # read -ra ver2 00:07:10.194 23:07:06 -- scripts/common.sh@337 -- # local 'op=<' 00:07:10.194 23:07:06 -- scripts/common.sh@339 -- # ver1_l=2 00:07:10.194 23:07:06 -- scripts/common.sh@340 -- # ver2_l=1 00:07:10.194 23:07:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:10.194 23:07:06 -- scripts/common.sh@343 -- # case "$op" in 00:07:10.194 23:07:06 -- scripts/common.sh@344 -- # : 1 00:07:10.194 23:07:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:10.194 23:07:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.194 23:07:06 -- scripts/common.sh@364 -- # decimal 1 00:07:10.194 23:07:06 -- scripts/common.sh@352 -- # local d=1 00:07:10.194 23:07:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.194 23:07:06 -- scripts/common.sh@354 -- # echo 1 00:07:10.194 23:07:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:10.194 23:07:06 -- scripts/common.sh@365 -- # decimal 2 00:07:10.194 23:07:06 -- scripts/common.sh@352 -- # local d=2 00:07:10.194 23:07:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.194 23:07:06 -- scripts/common.sh@354 -- # echo 2 00:07:10.194 23:07:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:10.194 23:07:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:10.194 23:07:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:10.194 23:07:06 -- scripts/common.sh@367 -- # return 0 00:07:10.194 23:07:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.194 23:07:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:10.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.194 --rc genhtml_branch_coverage=1 00:07:10.194 --rc genhtml_function_coverage=1 00:07:10.194 --rc genhtml_legend=1 00:07:10.194 --rc geninfo_all_blocks=1 00:07:10.194 --rc geninfo_unexecuted_blocks=1 00:07:10.194 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:10.194 ' 00:07:10.194 23:07:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:10.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.194 --rc genhtml_branch_coverage=1 00:07:10.194 --rc genhtml_function_coverage=1 00:07:10.194 --rc genhtml_legend=1 00:07:10.194 --rc geninfo_all_blocks=1 00:07:10.194 --rc geninfo_unexecuted_blocks=1 00:07:10.194 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:10.194 ' 00:07:10.194 23:07:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:10.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:10.194 --rc genhtml_branch_coverage=1 00:07:10.194 --rc genhtml_function_coverage=1 00:07:10.194 --rc genhtml_legend=1 00:07:10.194 --rc geninfo_all_blocks=1 00:07:10.194 --rc geninfo_unexecuted_blocks=1 00:07:10.194 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:10.194 ' 00:07:10.194 23:07:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:10.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.194 --rc genhtml_branch_coverage=1 00:07:10.194 --rc genhtml_function_coverage=1 00:07:10.194 --rc genhtml_legend=1 00:07:10.194 --rc geninfo_all_blocks=1 00:07:10.194 --rc geninfo_unexecuted_blocks=1 00:07:10.194 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:07:10.194 ' 00:07:10.194 23:07:06 -- nvmf/run.sh@53 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:07:10.194 23:07:06 -- ../common.sh@8 -- # pids=() 00:07:10.194 23:07:06 -- nvmf/run.sh@55 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:10.194 23:07:06 -- nvmf/run.sh@56 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:07:10.194 23:07:06 -- nvmf/run.sh@56 -- # fuzz_num=25 00:07:10.194 23:07:06 -- nvmf/run.sh@57 -- # (( fuzz_num != 0 )) 00:07:10.194 23:07:06 -- nvmf/run.sh@59 -- # trap 'cleanup /tmp/llvm_fuzz*; exit 1' SIGINT SIGTERM EXIT 00:07:10.194 23:07:06 -- nvmf/run.sh@61 -- # mem_size=512 00:07:10.194 23:07:06 -- nvmf/run.sh@62 -- # [[ 1 -eq 1 ]] 00:07:10.194 23:07:06 -- nvmf/run.sh@63 -- # start_llvm_fuzz_short 25 1 00:07:10.194 23:07:06 -- ../common.sh@69 -- # local fuzz_num=25 00:07:10.194 23:07:06 -- ../common.sh@70 -- # local time=1 00:07:10.194 23:07:06 -- ../common.sh@72 -- # (( i = 0 )) 00:07:10.194 23:07:06 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:10.194 23:07:06 -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:07:10.194 23:07:06 -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:07:10.194 23:07:06 -- nvmf/run.sh@24 -- # local timen=1 00:07:10.194 23:07:06 -- nvmf/run.sh@25 -- # local core=0x1 00:07:10.194 23:07:06 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:10.194 23:07:06 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:07:10.194 23:07:06 -- nvmf/run.sh@29 -- # printf %02d 0 00:07:10.194 23:07:06 -- nvmf/run.sh@29 -- # port=4400 00:07:10.194 23:07:06 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:10.194 23:07:06 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:07:10.194 23:07:06 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:10.194 23:07:06 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 -r /var/tmp/spdk0.sock 00:07:10.194 [2024-11-17 23:07:06.663591] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 
initialization... 00:07:10.194 [2024-11-17 23:07:06.663655] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296593 ] 00:07:10.194 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.514 [2024-11-17 23:07:06.840348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.514 [2024-11-17 23:07:06.910297] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:10.514 [2024-11-17 23:07:06.910432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.514 [2024-11-17 23:07:06.968910] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.514 [2024-11-17 23:07:06.985239] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:07:10.514 INFO: Running with entropic power schedule (0xFF, 100). 00:07:10.514 INFO: Seed: 3500169277 00:07:10.514 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:10.514 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:10.514 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:07:10.514 INFO: A corpus is not provided, starting from an empty corpus 00:07:10.514 #2 INITED exec/s: 0 rss: 60Mb 00:07:10.514 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:10.514 This may also happen if the target rejected all inputs we tried so far 00:07:10.514 [2024-11-17 23:07:07.061195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e7e7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e7e7e7e7e7e 00:07:10.514 [2024-11-17 23:07:07.061230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:10.773 NEW_FUNC[1/670]: 0x43a858 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:07:10.773 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:10.773 #6 NEW cov: 11530 ft: 11535 corp: 2/65b lim: 320 exec/s: 0 rss: 68Mb L: 64/64 MS: 4 ChangeByte-ChangeBit-ChangeBit-InsertRepeatedBytes- 00:07:10.773 [2024-11-17 23:07:07.382316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e7e7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e8b7e7e7e7e 00:07:10.773 [2024-11-17 23:07:07.382353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.031 #7 NEW cov: 11659 ft: 12098 corp: 3/130b lim: 320 exec/s: 0 rss: 68Mb L: 65/65 MS: 1 InsertByte- 00:07:11.031 [2024-11-17 23:07:07.442389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e7e7e cdw11:8b7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e7e7e7e7e7e 00:07:11.032 [2024-11-17 23:07:07.442420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.032 #8 NEW cov: 11665 ft: 12424 corp: 4/214b lim: 320 exec/s: 0 rss: 68Mb L: 84/84 MS: 1 CrossOver- 00:07:11.032 
[2024-11-17 23:07:07.492610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e7e7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e7e7e7e7e7e 00:07:11.032 [2024-11-17 23:07:07.492638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.032 #9 NEW cov: 11750 ft: 12725 corp: 5/278b lim: 320 exec/s: 0 rss: 68Mb L: 64/84 MS: 1 ChangeBit- 00:07:11.032 [2024-11-17 23:07:07.542941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e8b7e7e7e7e 00:07:11.032 [2024-11-17 23:07:07.542968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.032 [2024-11-17 23:07:07.543096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:7e7e7e7e 00:07:11.032 [2024-11-17 23:07:07.543115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.032 #15 NEW cov: 11752 ft: 13003 corp: 6/414b lim: 320 exec/s: 0 rss: 68Mb L: 136/136 MS: 1 InsertRepeatedBytes- 00:07:11.032 [2024-11-17 23:07:07.592841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e7e7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e7e7e7e7e7e 00:07:11.032 [2024-11-17 23:07:07.592868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.032 #16 NEW cov: 11752 ft: 13034 corp: 7/484b lim: 320 exec/s: 0 rss: 68Mb L: 70/136 MS: 1 CopyPart- 00:07:11.032 [2024-11-17 23:07:07.643039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e7e7e cdw11:7e7e887e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e8b7e7e7e7e 00:07:11.032 [2024-11-17 23:07:07.643066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.291 #17 NEW cov: 11752 ft: 13098 corp: 8/549b lim: 320 exec/s: 0 rss: 68Mb L: 65/136 MS: 1 ChangeBinInt- 00:07:11.291 [2024-11-17 23:07:07.693190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e3b7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e8b7e7e7e7e 00:07:11.291 [2024-11-17 23:07:07.693216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.291 #18 NEW cov: 11752 ft: 13133 corp: 9/614b lim: 320 exec/s: 0 rss: 68Mb L: 65/136 MS: 1 ChangeByte- 00:07:11.291 [2024-11-17 23:07:07.743268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:81817c7e cdw10:7e7e3b7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e8b7e7e7e7e 00:07:11.291 [2024-11-17 23:07:07.743296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.291 #19 NEW cov: 11752 ft: 13160 corp: 10/679b lim: 320 exec/s: 0 rss: 68Mb L: 65/136 MS: 1 ChangeBinInt- 00:07:11.291 [2024-11-17 23:07:07.793447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e3b7e cdw11:7e7e7e7e SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e8b7e7e7e7e 00:07:11.291 [2024-11-17 23:07:07.793475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.291 #20 NEW cov: 11752 ft: 13179 corp: 11/744b lim: 320 exec/s: 0 rss: 68Mb L: 65/136 MS: 1 ShuffleBytes- 00:07:11.291 [2024-11-17 23:07:07.843615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e7e7e cdw11:6e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e8b7e7e7e7e 00:07:11.291 [2024-11-17 23:07:07.843645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.291 #26 NEW cov: 11752 ft: 13196 corp: 12/809b lim: 320 exec/s: 0 rss: 68Mb L: 65/136 MS: 1 ChangeBit- 00:07:11.291 [2024-11-17 23:07:07.894079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e8b7e7e7e7e 00:07:11.291 [2024-11-17 23:07:07.894105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.291 [2024-11-17 23:07:07.894243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:7e7e7e7e 00:07:11.291 [2024-11-17 23:07:07.894261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.550 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:11.550 #27 NEW cov: 11775 ft: 13230 corp: 13/945b lim: 320 exec/s: 0 rss: 68Mb L: 136/136 MS: 1 ChangeBit- 00:07:11.550 [2024-11-17 23:07:07.954260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:11.550 [2024-11-17 23:07:07.954289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.550 [2024-11-17 23:07:07.954454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:11.550 [2024-11-17 23:07:07.954470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.550 NEW_FUNC[1/1]: 0x12c5228 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2016 00:07:11.550 #28 NEW cov: 11806 ft: 13321 corp: 14/1127b lim: 320 exec/s: 0 rss: 68Mb L: 182/182 MS: 1 InsertRepeatedBytes- 00:07:11.550 [2024-11-17 23:07:08.004242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e3b7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e8b7e7e7e7e 00:07:11.550 [2024-11-17 23:07:08.004271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.550 #29 NEW cov: 11806 ft: 13352 corp: 15/1203b lim: 320 exec/s: 29 rss: 69Mb L: 76/182 MS: 1 CopyPart- 00:07:11.550 [2024-11-17 23:07:08.064503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e7e7e cdw11:7e7e7e7e SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x7e7e7e7e7e7e7e7e 00:07:11.550 [2024-11-17 23:07:08.064536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.550 #30 NEW cov: 11806 ft: 13398 corp: 16/1268b lim: 320 exec/s: 30 rss: 69Mb L: 65/182 MS: 1 InsertByte- 00:07:11.550 [2024-11-17 23:07:08.114584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e3b7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e7e7e7e7e7e 00:07:11.550 [2024-11-17 23:07:08.114625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.550 #31 NEW cov: 11806 ft: 13429 corp: 17/1333b lim: 320 exec/s: 31 rss: 69Mb L: 65/182 MS: 1 CopyPart- 00:07:11.809 [2024-11-17 23:07:08.165243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:11.809 [2024-11-17 23:07:08.165270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.809 [2024-11-17 23:07:08.165388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:7e7e7e00 00:07:11.809 [2024-11-17 23:07:08.165411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:11.809 [2024-11-17 23:07:08.165531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:11.809 [2024-11-17 23:07:08.165554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:11.809 #32 NEW cov: 11806 ft: 13592 corp: 18/1555b lim: 320 exec/s: 32 rss: 69Mb L: 222/222 MS: 1 InsertRepeatedBytes- 00:07:11.809 [2024-11-17 23:07:08.214934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e7e7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e7e7e7e7e7e 00:07:11.809 [2024-11-17 23:07:08.214963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.809 #38 NEW cov: 11806 ft: 13617 corp: 19/1619b lim: 320 exec/s: 38 rss: 69Mb L: 64/222 MS: 1 CMP- DE: "\276f\343\346G\017\213\000"- 00:07:11.809 [2024-11-17 23:07:08.274785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e7e7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e7e7e7e7e7e 00:07:11.809 [2024-11-17 23:07:08.274814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.809 #39 NEW cov: 11806 ft: 13673 corp: 20/1684b lim: 320 exec/s: 39 rss: 69Mb L: 65/222 MS: 1 ChangeBinInt- 00:07:11.809 [2024-11-17 23:07:08.335298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e7e7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e7e7e7e7e7e 00:07:11.809 [2024-11-17 23:07:08.335326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.809 #40 NEW cov: 11806 ft: 13718 corp: 21/1754b lim: 320 exec/s: 40 rss: 69Mb L: 70/222 MS: 1 
ChangeByte- 00:07:11.809 [2024-11-17 23:07:08.395687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e8b7e7e7e7e 00:07:11.809 [2024-11-17 23:07:08.395715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:11.809 [2024-11-17 23:07:08.395845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:7e7e7e7e 00:07:11.809 [2024-11-17 23:07:08.395862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:12.068 #41 NEW cov: 11806 ft: 13730 corp: 22/1890b lim: 320 exec/s: 41 rss: 69Mb L: 136/222 MS: 1 ShuffleBytes- 00:07:12.068 [2024-11-17 23:07:08.455726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e7e7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e7e7e7e7e7e 00:07:12.068 [2024-11-17 23:07:08.455753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.068 #42 NEW cov: 11806 ft: 13732 corp: 23/1955b lim: 320 exec/s: 42 rss: 69Mb L: 65/222 MS: 1 InsertByte- 00:07:12.069 [2024-11-17 23:07:08.515958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:12.069 [2024-11-17 23:07:08.515986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.069 #43 NEW cov: 11806 ft: 13762 corp: 24/2059b lim: 320 exec/s: 43 rss: 69Mb L: 104/222 MS: 1 InsertRepeatedBytes- 00:07:12.069 [2024-11-17 23:07:08.576065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:07:12.069 [2024-11-17 23:07:08.576092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.069 #44 NEW cov: 11806 ft: 13765 corp: 25/2163b lim: 320 exec/s: 44 rss: 69Mb L: 104/222 MS: 1 ShuffleBytes- 00:07:12.069 [2024-11-17 23:07:08.626212] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e7e7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e7e237e7e7e 00:07:12.069 [2024-11-17 23:07:08.626239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.069 #45 NEW cov: 11806 ft: 13779 corp: 26/2228b lim: 320 exec/s: 45 rss: 69Mb L: 65/222 MS: 1 ChangeByte- 00:07:12.069 [2024-11-17 23:07:08.676348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e3b7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e7e7e7e7e7e 00:07:12.069 [2024-11-17 23:07:08.676373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.328 #46 NEW cov: 11806 ft: 13782 corp: 27/2293b lim: 320 exec/s: 46 rss: 69Mb L: 65/222 MS: 1 ShuffleBytes- 00:07:12.328 [2024-11-17 23:07:08.726452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 
cid:4 nsid:7e7e7e7e cdw10:7e7e7e7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e7e7e7e7e7e 00:07:12.328 [2024-11-17 23:07:08.726479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.328 #47 NEW cov: 11806 ft: 13790 corp: 28/2359b lim: 320 exec/s: 47 rss: 69Mb L: 66/222 MS: 1 InsertByte- 00:07:12.328 [2024-11-17 23:07:08.776732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e3b7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e8b7e7e7e7e 00:07:12.328 [2024-11-17 23:07:08.776757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.328 #48 NEW cov: 11806 ft: 13798 corp: 29/2435b lim: 320 exec/s: 48 rss: 69Mb L: 76/222 MS: 1 CMP- DE: "\377\377\377\013"- 00:07:12.328 [2024-11-17 23:07:08.826810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e3b7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e8b7e7e7e7e 00:07:12.328 [2024-11-17 23:07:08.826836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.328 #54 NEW cov: 11806 ft: 13818 corp: 30/2511b lim: 320 exec/s: 54 rss: 70Mb L: 76/222 MS: 1 ShuffleBytes- 00:07:12.328 [2024-11-17 23:07:08.876934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.328 [2024-11-17 23:07:08.876960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.328 #56 NEW cov: 11806 ft: 13839 corp: 31/2628b lim: 320 exec/s: 56 rss: 70Mb L: 117/222 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:12.328 [2024-11-17 23:07:08.927051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:ffffffff cdw10:7e7e7e7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e7e7e7e7e7e 00:07:12.328 [2024-11-17 23:07:08.927080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.588 #57 NEW cov: 11806 ft: 13882 corp: 32/2721b lim: 320 exec/s: 57 rss: 70Mb L: 93/222 MS: 1 InsertRepeatedBytes- 00:07:12.588 [2024-11-17 23:07:08.977230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:ffffffff cdw10:7e7e7e7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e7e7e7e7e7e 00:07:12.588 [2024-11-17 23:07:08.977258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.588 #58 NEW cov: 11806 ft: 13890 corp: 33/2815b lim: 320 exec/s: 58 rss: 70Mb L: 94/222 MS: 1 InsertByte- 00:07:12.588 [2024-11-17 23:07:09.027520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (7e) qid:0 cid:4 nsid:7e7e7e7e cdw10:7e7e3b7e cdw11:7e7e7e7e SGL TRANSPORT DATA BLOCK TRANSPORT 0x7e7e7e7e7e7e7e7e 00:07:12.588 [2024-11-17 23:07:09.027555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:12.588 #59 NEW cov: 11806 ft: 13900 corp: 34/2880b lim: 320 exec/s: 29 rss: 70Mb L: 65/222 MS: 1 ChangeByte- 00:07:12.588 #59 DONE cov: 11806 ft: 13900 corp: 34/2880b lim: 320 exec/s: 29 
rss: 70Mb 00:07:12.588 ###### Recommended dictionary. ###### 00:07:12.588 "\276f\343\346G\017\213\000" # Uses: 1 00:07:12.588 "\377\377\377\013" # Uses: 0 00:07:12.588 ###### End of recommended dictionary. ###### 00:07:12.588 Done 59 runs in 2 second(s) 00:07:12.588 23:07:09 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_0.conf 00:07:12.588 23:07:09 -- ../common.sh@72 -- # (( i++ )) 00:07:12.588 23:07:09 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:12.588 23:07:09 -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:07:12.588 23:07:09 -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:07:12.588 23:07:09 -- nvmf/run.sh@24 -- # local timen=1 00:07:12.588 23:07:09 -- nvmf/run.sh@25 -- # local core=0x1 00:07:12.588 23:07:09 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:12.588 23:07:09 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:07:12.588 23:07:09 -- nvmf/run.sh@29 -- # printf %02d 1 00:07:12.588 23:07:09 -- nvmf/run.sh@29 -- # port=4401 00:07:12.588 23:07:09 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:12.588 23:07:09 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:07:12.588 23:07:09 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:12.588 23:07:09 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 -r /var/tmp/spdk1.sock 00:07:12.847 [2024-11-17 23:07:09.204266] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:12.847 [2024-11-17 23:07:09.204331] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297082 ] 00:07:12.847 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.847 [2024-11-17 23:07:09.380286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.847 [2024-11-17 23:07:09.443382] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:12.847 [2024-11-17 23:07:09.443508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.106 [2024-11-17 23:07:09.501897] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.106 [2024-11-17 23:07:09.518202] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:07:13.106 INFO: Running with entropic power schedule (0xFF, 100). 
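The xtrace lines above show how nvmf/run.sh stages each fuzzer pass: it removes the previous instance's JSON config, advances the loop counter in common.sh, and calls start_llvm_fuzz with the fuzzer number, runtime, and core mask. A minimal reconstruction of that helper from the traced commands follows; $rootdir, the redirection into $nvmf_cfg, and the function wrapper are assumptions inferred from the trace, not verbatim SPDK source.

    # Sketch reconstructed from the xtrace above -- $rootdir and the
    # output redirection are assumed, not copied from run.sh.
    start_llvm_fuzz() {
        local fuzzer_type=$1 timen=$2 core=$3
        local corpus_dir=$rootdir/../corpus/llvm_nvmf_${fuzzer_type}
        local nvmf_cfg=/tmp/fuzz_json_${fuzzer_type}.conf
        # Each instance gets its own TCP listener: "44" + two-digit fuzzer number.
        local port="44$(printf %02d "$fuzzer_type")"
        mkdir -p "$corpus_dir"
        local trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:${port}"
        # Point the target config at this instance's port instead of the default 4420.
        sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"${port}\"/" \
            "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"
        "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
            -m "$core" -s 512 -P "$rootdir/../output/llvm/" -F "$trid" \
            -c "$nvmf_cfg" -t "$timen" -D "$corpus_dir" \
            -Z "$fuzzer_type" -r "/var/tmp/spdk${fuzzer_type}.sock"
    }

Run 1 (fuzzer_type=1) therefore listens on 127.0.0.1:4401 and seeds from the llvm_nvmf_1 corpus directory, which the banner below reports as empty.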
00:07:13.106 INFO: Seed: 1738189226 00:07:13.106 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:13.106 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:13.106 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:07:13.106 INFO: A corpus is not provided, starting from an empty corpus 00:07:13.106 #2 INITED exec/s: 0 rss: 60Mb 00:07:13.106 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:13.106 This may also happen if the target rejected all inputs we tried so far 00:07:13.106 [2024-11-17 23:07:09.563263] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.106 [2024-11-17 23:07:09.563384] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.106 [2024-11-17 23:07:09.563491] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.106 [2024-11-17 23:07:09.563608] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.106 [2024-11-17 23:07:09.563813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.106 [2024-11-17 23:07:09.563846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.106 [2024-11-17 23:07:09.563898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.106 [2024-11-17 23:07:09.563912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.106 [2024-11-17 23:07:09.563965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.106 [2024-11-17 23:07:09.563978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.106 [2024-11-17 23:07:09.564029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.106 [2024-11-17 23:07:09.564042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.365 NEW_FUNC[1/671]: 0x43b158 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:07:13.365 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:13.365 #4 NEW cov: 11622 ft: 11623 corp: 2/29b lim: 30 exec/s: 0 rss: 68Mb L: 28/28 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:13.365 [2024-11-17 23:07:09.883980] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.365 [2024-11-17 23:07:09.884101] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.365 [2024-11-17 23:07:09.884204] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.365 [2024-11-17 23:07:09.884412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET 
LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.365 [2024-11-17 23:07:09.884445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.365 [2024-11-17 23:07:09.884497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.365 [2024-11-17 23:07:09.884511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.365 [2024-11-17 23:07:09.884562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.365 [2024-11-17 23:07:09.884576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.365 #5 NEW cov: 11735 ft: 12536 corp: 3/52b lim: 30 exec/s: 0 rss: 68Mb L: 23/28 MS: 1 EraseBytes- 00:07:13.365 [2024-11-17 23:07:09.934183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.365 [2024-11-17 23:07:09.934208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.365 #13 NEW cov: 11773 ft: 13135 corp: 4/60b lim: 30 exec/s: 0 rss: 68Mb L: 8/28 MS: 3 CrossOver-CopyPart-InsertRepeatedBytes- 00:07:13.365 [2024-11-17 23:07:09.974194] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.365 [2024-11-17 23:07:09.974299] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.365 [2024-11-17 23:07:09.974402] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.365 [2024-11-17 23:07:09.974509] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.365 [2024-11-17 23:07:09.974714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.365 [2024-11-17 23:07:09.974741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.365 [2024-11-17 23:07:09.974794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.365 [2024-11-17 23:07:09.974808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.365 [2024-11-17 23:07:09.974860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.365 [2024-11-17 23:07:09.974873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.365 [2024-11-17 23:07:09.974925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.365 [2024-11-17 23:07:09.974938] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.624 #15 NEW cov: 11858 ft: 13364 corp: 5/89b lim: 30 exec/s: 0 rss: 68Mb L: 29/29 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:13.624 [2024-11-17 23:07:10.014207] ctrlr.c:2547:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (26880) > len (4) 00:07:13.624 [2024-11-17 23:07:10.014409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.624 [2024-11-17 23:07:10.014434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.624 #16 NEW cov: 11864 ft: 13630 corp: 6/98b lim: 30 exec/s: 0 rss: 68Mb L: 9/29 MS: 1 InsertByte- 00:07:13.624 [2024-11-17 23:07:10.064401] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.624 [2024-11-17 23:07:10.064512] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.625 [2024-11-17 23:07:10.064730] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.625 [2024-11-17 23:07:10.064755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.625 [2024-11-17 23:07:10.064809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.625 [2024-11-17 23:07:10.064823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.625 #17 NEW cov: 11864 ft: 13909 corp: 7/113b lim: 30 exec/s: 0 rss: 68Mb L: 15/29 MS: 1 EraseBytes- 00:07:13.625 [2024-11-17 23:07:10.114717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.625 [2024-11-17 23:07:10.114744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.625 #18 NEW cov: 11864 ft: 14068 corp: 8/122b lim: 30 exec/s: 0 rss: 68Mb L: 9/29 MS: 1 InsertByte- 00:07:13.625 [2024-11-17 23:07:10.154604] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xa 00:07:13.625 [2024-11-17 23:07:10.154803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00690000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.625 [2024-11-17 23:07:10.154827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.625 #24 NEW cov: 11864 ft: 14105 corp: 9/128b lim: 30 exec/s: 0 rss: 68Mb L: 6/29 MS: 1 EraseBytes- 00:07:13.625 [2024-11-17 23:07:10.194765] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x97ff 00:07:13.625 [2024-11-17 23:07:10.194966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00020000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.625 [2024-11-17 23:07:10.194990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.625 #25 NEW cov: 11864 ft: 14140 corp: 10/137b lim: 30 exec/s: 0 rss: 68Mb L: 9/29 MS: 1 
ChangeBinInt- 00:07:13.625 [2024-11-17 23:07:10.235056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.625 [2024-11-17 23:07:10.235081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.884 #26 NEW cov: 11864 ft: 14212 corp: 11/146b lim: 30 exec/s: 0 rss: 68Mb L: 9/29 MS: 1 CMP- DE: "\000\000"- 00:07:13.884 [2024-11-17 23:07:10.285040] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (787456) > buf size (4096) 00:07:13.884 [2024-11-17 23:07:10.285238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff8397 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.884 [2024-11-17 23:07:10.285262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.884 #27 NEW cov: 11872 ft: 14335 corp: 12/155b lim: 30 exec/s: 0 rss: 69Mb L: 9/29 MS: 1 ShuffleBytes- 00:07:13.884 [2024-11-17 23:07:10.335255] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.884 [2024-11-17 23:07:10.335379] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.884 [2024-11-17 23:07:10.335481] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.884 [2024-11-17 23:07:10.335700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.884 [2024-11-17 23:07:10.335726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.884 [2024-11-17 23:07:10.335781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.884 [2024-11-17 23:07:10.335795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.884 [2024-11-17 23:07:10.335848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.884 [2024-11-17 23:07:10.335861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.884 #28 NEW cov: 11872 ft: 14376 corp: 13/174b lim: 30 exec/s: 0 rss: 69Mb L: 19/29 MS: 1 EraseBytes- 00:07:13.884 [2024-11-17 23:07:10.375406] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (797696) > buf size (4096) 00:07:13.884 [2024-11-17 23:07:10.375613] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.884 [2024-11-17 23:07:10.375715] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.884 [2024-11-17 23:07:10.375922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.884 [2024-11-17 23:07:10.375948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.884 [2024-11-17 23:07:10.376001] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000069 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.884 [2024-11-17 23:07:10.376018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.884 [2024-11-17 23:07:10.376071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.884 [2024-11-17 23:07:10.376085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.884 [2024-11-17 23:07:10.376136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.884 [2024-11-17 23:07:10.376149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:13.884 #29 NEW cov: 11872 ft: 14452 corp: 14/202b lim: 30 exec/s: 0 rss: 69Mb L: 28/29 MS: 1 CrossOver- 00:07:13.884 [2024-11-17 23:07:10.415426] ctrlr.c:2547:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (26880) > len (4) 00:07:13.884 [2024-11-17 23:07:10.415544] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (786436) > buf size (4096) 00:07:13.884 [2024-11-17 23:07:10.415745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.884 [2024-11-17 23:07:10.415770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.884 [2024-11-17 23:07:10.415823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:0000838b cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.884 [2024-11-17 23:07:10.415837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.884 #30 NEW cov: 11872 ft: 14521 corp: 15/219b lim: 30 exec/s: 0 rss: 69Mb L: 17/29 MS: 1 CMP- DE: "\000\213\017I0\034\346\210"- 00:07:13.884 [2024-11-17 23:07:10.455587] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.884 [2024-11-17 23:07:10.455699] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:13.884 [2024-11-17 23:07:10.455831] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100001e02 00:07:13.884 [2024-11-17 23:07:10.456030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.884 [2024-11-17 23:07:10.456056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:13.884 [2024-11-17 23:07:10.456109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.884 [2024-11-17 23:07:10.456124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:13.884 [2024-11-17 23:07:10.456175] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 
cid:6 nsid:0 cdw10:ffff8149 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.884 [2024-11-17 23:07:10.456188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:13.884 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:13.884 #31 NEW cov: 11895 ft: 14618 corp: 16/242b lim: 30 exec/s: 0 rss: 69Mb L: 23/29 MS: 1 CMP- DE: "II\036\002\000\000\000\000"- 00:07:13.884 [2024-11-17 23:07:10.495774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:13.884 [2024-11-17 23:07:10.495800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.144 #32 NEW cov: 11895 ft: 14643 corp: 17/250b lim: 30 exec/s: 0 rss: 69Mb L: 8/29 MS: 1 PersAutoDict- DE: "\000\000"- 00:07:14.144 [2024-11-17 23:07:10.535716] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (534824) > buf size (4096) 00:07:14.144 [2024-11-17 23:07:10.535915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a490249 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.144 [2024-11-17 23:07:10.535940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.144 #33 NEW cov: 11895 ft: 14707 corp: 18/259b lim: 30 exec/s: 33 rss: 69Mb L: 9/29 MS: 1 PersAutoDict- DE: "II\036\002\000\000\000\000"- 00:07:14.144 [2024-11-17 23:07:10.576018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.144 [2024-11-17 23:07:10.576043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.144 #34 NEW cov: 11895 ft: 14721 corp: 19/268b lim: 30 exec/s: 34 rss: 69Mb L: 9/29 MS: 1 CopyPart- 00:07:14.144 [2024-11-17 23:07:10.615973] ctrlr.c:2547:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (26880) > len (4) 00:07:14.144 [2024-11-17 23:07:10.616169] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.144 [2024-11-17 23:07:10.616193] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.144 #35 NEW cov: 11895 ft: 14758 corp: 20/277b lim: 30 exec/s: 35 rss: 69Mb L: 9/29 MS: 1 ChangeBinInt- 00:07:14.144 [2024-11-17 23:07:10.656061] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10536) > buf size (4096) 00:07:14.144 [2024-11-17 23:07:10.656258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a490049 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.144 [2024-11-17 23:07:10.656283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.144 #36 NEW cov: 11895 ft: 14838 corp: 21/286b lim: 30 exec/s: 36 rss: 69Mb L: 9/29 MS: 1 ChangeByte- 00:07:14.144 [2024-11-17 23:07:10.696378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:14.144 [2024-11-17 23:07:10.696403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.145 #37 NEW cov: 11895 ft: 14854 corp: 22/295b lim: 30 exec/s: 37 rss: 69Mb L: 9/29 MS: 1 ShuffleBytes- 00:07:14.145 [2024-11-17 23:07:10.736392] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (797696) > buf size (4096) 00:07:14.145 [2024-11-17 23:07:10.736604] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:14.145 [2024-11-17 23:07:10.736708] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:14.145 [2024-11-17 23:07:10.736904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.145 [2024-11-17 23:07:10.736928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.145 [2024-11-17 23:07:10.736980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000069 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.145 [2024-11-17 23:07:10.736994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.145 [2024-11-17 23:07:10.737045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.145 [2024-11-17 23:07:10.737059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.145 [2024-11-17 23:07:10.737110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.145 [2024-11-17 23:07:10.737127] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.404 #38 NEW cov: 11895 ft: 14889 corp: 23/324b lim: 30 exec/s: 38 rss: 69Mb L: 29/29 MS: 1 InsertByte- 00:07:14.404 [2024-11-17 23:07:10.776415] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10536) > buf size (4096) 00:07:14.404 [2024-11-17 23:07:10.776635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a490049 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.404 [2024-11-17 23:07:10.776660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.404 #39 NEW cov: 11895 ft: 14911 corp: 24/331b lim: 30 exec/s: 39 rss: 69Mb L: 7/29 MS: 1 EraseBytes- 00:07:14.404 [2024-11-17 23:07:10.816649] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (797696) > buf size (4096) 00:07:14.404 [2024-11-17 23:07:10.816849] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:14.404 [2024-11-17 23:07:10.816952] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:14.404 [2024-11-17 23:07:10.817151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.404 [2024-11-17 23:07:10.817176] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.404 [2024-11-17 23:07:10.817230] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000069 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.404 [2024-11-17 23:07:10.817244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.404 [2024-11-17 23:07:10.817295] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.404 [2024-11-17 23:07:10.817309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.404 [2024-11-17 23:07:10.817360] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.404 [2024-11-17 23:07:10.817373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.404 #40 NEW cov: 11895 ft: 14938 corp: 25/359b lim: 30 exec/s: 40 rss: 69Mb L: 28/29 MS: 1 EraseBytes- 00:07:14.404 [2024-11-17 23:07:10.856687] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x49 00:07:14.404 [2024-11-17 23:07:10.856795] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (74876) > buf size (4096) 00:07:14.404 [2024-11-17 23:07:10.856986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.404 [2024-11-17 23:07:10.857011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.404 [2024-11-17 23:07:10.857063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:491e0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.404 [2024-11-17 23:07:10.857077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.404 #41 NEW cov: 11895 ft: 14958 corp: 26/376b lim: 30 exec/s: 41 rss: 69Mb L: 17/29 MS: 1 PersAutoDict- DE: "II\036\002\000\000\000\000"- 00:07:14.404 [2024-11-17 23:07:10.896798] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000096ff 00:07:14.404 [2024-11-17 23:07:10.897006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00fb83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.404 [2024-11-17 23:07:10.897034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.404 #42 NEW cov: 11895 ft: 15009 corp: 27/385b lim: 30 exec/s: 42 rss: 69Mb L: 9/29 MS: 1 ChangeBinInt- 00:07:14.404 [2024-11-17 23:07:10.937053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.404 [2024-11-17 23:07:10.937078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.404 #43 NEW cov: 11895 ft: 15018 corp: 28/393b lim: 30 exec/s: 43 rss: 70Mb L: 8/29 MS: 1 
CopyPart- 00:07:14.404 [2024-11-17 23:07:10.977084] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (797696) > buf size (4096) 00:07:14.404 [2024-11-17 23:07:10.977209] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:07:14.404 [2024-11-17 23:07:10.977339] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:14.404 [2024-11-17 23:07:10.977442] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:14.404 [2024-11-17 23:07:10.977663] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.404 [2024-11-17 23:07:10.977688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.404 [2024-11-17 23:07:10.977740] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000069 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.404 [2024-11-17 23:07:10.977754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.404 [2024-11-17 23:07:10.977806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.404 [2024-11-17 23:07:10.977820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.405 [2024-11-17 23:07:10.977873] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.405 [2024-11-17 23:07:10.977886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.405 #44 NEW cov: 11895 ft: 15030 corp: 29/418b lim: 30 exec/s: 44 rss: 70Mb L: 25/29 MS: 1 EraseBytes- 00:07:14.665 [2024-11-17 23:07:11.017252] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (797696) > buf size (4096) 00:07:14.665 [2024-11-17 23:07:11.017361] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0xffff 00:07:14.665 [2024-11-17 23:07:11.017464] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:14.665 [2024-11-17 23:07:11.017583] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:14.665 [2024-11-17 23:07:11.017686] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:14.665 [2024-11-17 23:07:11.017884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.665 [2024-11-17 23:07:11.017908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.665 [2024-11-17 23:07:11.017961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000069 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.665 [2024-11-17 23:07:11.017975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.665 [2024-11-17 23:07:11.018025] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.665 [2024-11-17 23:07:11.018039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.665 [2024-11-17 23:07:11.018094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.665 [2024-11-17 23:07:11.018107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.665 [2024-11-17 23:07:11.018160] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.665 [2024-11-17 23:07:11.018174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:14.665 #50 NEW cov: 11895 ft: 15128 corp: 30/448b lim: 30 exec/s: 50 rss: 70Mb L: 30/30 MS: 1 CopyPart- 00:07:14.665 [2024-11-17 23:07:11.057223] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262148) > buf size (4096) 00:07:14.665 [2024-11-17 23:07:11.057431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.665 [2024-11-17 23:07:11.057455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.665 #51 NEW cov: 11895 ft: 15152 corp: 31/457b lim: 30 exec/s: 51 rss: 70Mb L: 9/30 MS: 1 ChangeBinInt- 00:07:14.665 [2024-11-17 23:07:11.097315] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:14.665 [2024-11-17 23:07:11.097516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.665 [2024-11-17 23:07:11.097546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.665 #52 NEW cov: 11895 ft: 15173 corp: 32/467b lim: 30 exec/s: 52 rss: 70Mb L: 10/30 MS: 1 CrossOver- 00:07:14.665 [2024-11-17 23:07:11.137446] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:14.665 [2024-11-17 23:07:11.137650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a00000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.665 [2024-11-17 23:07:11.137675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.665 #53 NEW cov: 11895 ft: 15181 corp: 33/477b lim: 30 exec/s: 53 rss: 70Mb L: 10/30 MS: 1 ChangeBinInt- 00:07:14.665 [2024-11-17 23:07:11.177612] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (796676) > buf size (4096) 00:07:14.665 [2024-11-17 23:07:11.177826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0083ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.665 [2024-11-17 23:07:11.177850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.665 #54 NEW cov: 11895 
ft: 15183 corp: 34/487b lim: 30 exec/s: 54 rss: 70Mb L: 10/30 MS: 1 CrossOver- 00:07:14.665 [2024-11-17 23:07:11.207636] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2ff 00:07:14.665 [2024-11-17 23:07:11.207843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0000ff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.665 [2024-11-17 23:07:11.207867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.665 #55 NEW cov: 11895 ft: 15191 corp: 35/494b lim: 30 exec/s: 55 rss: 70Mb L: 7/30 MS: 1 EraseBytes- 00:07:14.665 [2024-11-17 23:07:11.247798] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x3000097ff 00:07:14.665 [2024-11-17 23:07:11.248000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:000a8300 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.665 [2024-11-17 23:07:11.248023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.665 #56 NEW cov: 11895 ft: 15203 corp: 36/505b lim: 30 exec/s: 56 rss: 70Mb L: 11/30 MS: 1 InsertByte- 00:07:14.925 [2024-11-17 23:07:11.288065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.925 [2024-11-17 23:07:11.288090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.925 #57 NEW cov: 11895 ft: 15214 corp: 37/514b lim: 30 exec/s: 57 rss: 70Mb L: 9/30 MS: 1 ChangeBinInt- 00:07:14.925 [2024-11-17 23:07:11.328111] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1 00:07:14.925 [2024-11-17 23:07:11.328311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.925 [2024-11-17 23:07:11.328335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.925 #58 NEW cov: 11895 ft: 15217 corp: 38/523b lim: 30 exec/s: 58 rss: 70Mb L: 9/30 MS: 1 ChangeBit- 00:07:14.925 [2024-11-17 23:07:11.368126] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (264236) > buf size (4096) 00:07:14.925 [2024-11-17 23:07:11.368327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:020a8100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.925 [2024-11-17 23:07:11.368352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.925 #59 NEW cov: 11895 ft: 15235 corp: 39/530b lim: 30 exec/s: 59 rss: 70Mb L: 7/30 MS: 1 ShuffleBytes- 00:07:14.925 [2024-11-17 23:07:11.408341] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:14.925 [2024-11-17 23:07:11.408454] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:14.925 [2024-11-17 23:07:11.408564] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100001e02 00:07:14.925 [2024-11-17 23:07:11.408763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:35ff83ff cdw11:00000003 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:14.925 [2024-11-17 23:07:11.408787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.925 [2024-11-17 23:07:11.408840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.925 [2024-11-17 23:07:11.408854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.925 [2024-11-17 23:07:11.408906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff8149 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.925 [2024-11-17 23:07:11.408920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.925 #60 NEW cov: 11895 ft: 15247 corp: 40/553b lim: 30 exec/s: 60 rss: 70Mb L: 23/30 MS: 1 ChangeByte- 00:07:14.925 [2024-11-17 23:07:11.448475] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000ffff 00:07:14.925 [2024-11-17 23:07:11.448594] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:14.925 [2024-11-17 23:07:11.448700] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:14.925 [2024-11-17 23:07:11.448801] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:14.925 [2024-11-17 23:07:11.448903] ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:07:14.925 [2024-11-17 23:07:11.449096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a0a81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.925 [2024-11-17 23:07:11.449119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.925 [2024-11-17 23:07:11.449176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.925 [2024-11-17 23:07:11.449190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.925 [2024-11-17 23:07:11.449240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.925 [2024-11-17 23:07:11.449254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:14.925 [2024-11-17 23:07:11.449307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.925 [2024-11-17 23:07:11.449320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:14.925 [2024-11-17 23:07:11.449371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:8 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.925 [2024-11-17 23:07:11.449385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:14.925 #61 NEW cov: 11895 ft: 
15259 corp: 41/583b lim: 30 exec/s: 61 rss: 70Mb L: 30/30 MS: 1 InsertByte- 00:07:14.925 [2024-11-17 23:07:11.488623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.925 [2024-11-17 23:07:11.488648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.925 #62 NEW cov: 11895 ft: 15273 corp: 42/591b lim: 30 exec/s: 62 rss: 70Mb L: 8/30 MS: 1 ShuffleBytes- 00:07:14.925 [2024-11-17 23:07:11.528644] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (10244) > buf size (4096) 00:07:14.925 [2024-11-17 23:07:11.528777] ctrlr.c:2516:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (262148) > buf size (4096) 00:07:14.925 [2024-11-17 23:07:11.529076] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:0a00000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.925 [2024-11-17 23:07:11.529100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:14.925 [2024-11-17 23:07:11.529154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008100 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.925 [2024-11-17 23:07:11.529170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:14.925 [2024-11-17 23:07:11.529222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:14.925 [2024-11-17 23:07:11.529235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.185 #63 NEW cov: 11895 ft: 15314 corp: 43/609b lim: 30 exec/s: 31 rss: 70Mb L: 18/30 MS: 1 InsertRepeatedBytes- 00:07:15.185 #63 DONE cov: 11895 ft: 15314 corp: 43/609b lim: 30 exec/s: 31 rss: 70Mb 00:07:15.185 ###### Recommended dictionary. ###### 00:07:15.185 "\000\000" # Uses: 1 00:07:15.185 "\000\213\017I0\034\346\210" # Uses: 0 00:07:15.185 "II\036\002\000\000\000\000" # Uses: 2 00:07:15.185 ###### End of recommended dictionary. 
###### 00:07:15.185 Done 63 runs in 2 second(s) 00:07:15.185 23:07:11 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_1.conf 00:07:15.185 23:07:11 -- ../common.sh@72 -- # (( i++ )) 00:07:15.185 23:07:11 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:15.185 23:07:11 -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:07:15.185 23:07:11 -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:07:15.185 23:07:11 -- nvmf/run.sh@24 -- # local timen=1 00:07:15.185 23:07:11 -- nvmf/run.sh@25 -- # local core=0x1 00:07:15.185 23:07:11 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:15.185 23:07:11 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:07:15.185 23:07:11 -- nvmf/run.sh@29 -- # printf %02d 2 00:07:15.185 23:07:11 -- nvmf/run.sh@29 -- # port=4402 00:07:15.185 23:07:11 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:15.185 23:07:11 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:07:15.185 23:07:11 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:15.185 23:07:11 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 -r /var/tmp/spdk2.sock 00:07:15.185 [2024-11-17 23:07:11.701473] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:15.185 [2024-11-17 23:07:11.701544] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297617 ] 00:07:15.185 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.444 [2024-11-17 23:07:11.874670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.444 [2024-11-17 23:07:11.938291] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:15.444 [2024-11-17 23:07:11.938421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.444 [2024-11-17 23:07:11.996678] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.444 [2024-11-17 23:07:12.012993] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:07:15.444 INFO: Running with entropic power schedule (0xFF, 100). 00:07:15.444 INFO: Seed: 4232192045 00:07:15.444 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:15.444 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:15.444 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:07:15.444 INFO: A corpus is not provided, starting from an empty corpus 00:07:15.444 #2 INITED exec/s: 0 rss: 60Mb 00:07:15.444 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
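Each status line above follows libFuzzer's standard progress format: the step counter (#N) and event (NEW, REDUCE, DONE), cov: code edges covered by the instrumentation, ft: distinct coverage features, corp: corpus entries/total bytes, lim: the current input-length cap, exec/s:, rss:, L: this input's length versus the largest in the corpus, and MS: the mutation sequence that produced it (InsertByte, CrossOver, and so on). To chart coverage growth from a saved copy of this console output, a one-liner along these lines is enough; build.log is a hypothetical filename for the saved log.

    # Hypothetical post-processing helper, not part of the SPDK tree:
    # pull the step tag, event, and edge-coverage count from each NEW/DONE line.
    grep -oE '#[0-9]+ (NEW|DONE) cov: [0-9]+' build.log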
00:07:15.444 This may also happen if the target rejected all inputs we tried so far 00:07:15.703 [2024-11-17 23:07:12.068437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.703 [2024-11-17 23:07:12.068465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.703 [2024-11-17 23:07:12.068519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.703 [2024-11-17 23:07:12.068538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.703 [2024-11-17 23:07:12.068589] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.703 [2024-11-17 23:07:12.068602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.962 NEW_FUNC[1/670]: 0x43db78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:07:15.962 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:15.962 #6 NEW cov: 11580 ft: 11581 corp: 2/25b lim: 35 exec/s: 0 rss: 68Mb L: 24/24 MS: 4 CrossOver-ChangeByte-InsertByte-InsertRepeatedBytes- 00:07:15.962 [2024-11-17 23:07:12.389195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.962 [2024-11-17 23:07:12.389251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.963 [2024-11-17 23:07:12.389336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.963 [2024-11-17 23:07:12.389353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.963 #7 NEW cov: 11693 ft: 12312 corp: 3/40b lim: 35 exec/s: 0 rss: 68Mb L: 15/24 MS: 1 EraseBytes- 00:07:15.963 [2024-11-17 23:07:12.439296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.963 [2024-11-17 23:07:12.439322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.963 [2024-11-17 23:07:12.439375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:444400ff cdw11:44004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.963 [2024-11-17 23:07:12.439389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.963 [2024-11-17 23:07:12.439442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:44440044 cdw11:ff004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.963 [2024-11-17 23:07:12.439455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.963 #8 NEW cov: 11699 ft: 12582 corp: 4/66b lim: 35 exec/s: 0 rss: 68Mb L: 26/26 MS: 1 InsertRepeatedBytes- 00:07:15.963 [2024-11-17 23:07:12.479288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.963 [2024-11-17 23:07:12.479312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.963 [2024-11-17 23:07:12.479366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.963 [2024-11-17 23:07:12.479381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.963 #9 NEW cov: 11784 ft: 12761 corp: 5/82b lim: 35 exec/s: 0 rss: 68Mb L: 16/26 MS: 1 InsertByte- 00:07:15.963 [2024-11-17 23:07:12.519492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.963 [2024-11-17 23:07:12.519516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.963 [2024-11-17 23:07:12.519574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:444400ff cdw11:44004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.963 [2024-11-17 23:07:12.519604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.963 [2024-11-17 23:07:12.519659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:44ff0044 cdw11:44004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.963 [2024-11-17 23:07:12.519673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.963 #10 NEW cov: 11784 ft: 12821 corp: 6/109b lim: 35 exec/s: 0 rss: 68Mb L: 27/27 MS: 1 InsertByte- 00:07:15.963 [2024-11-17 23:07:12.559386] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:15.963 [2024-11-17 23:07:12.559505] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:15.963 [2024-11-17 23:07:12.559629] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:15.963 [2024-11-17 23:07:12.559839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.963 [2024-11-17 23:07:12.559864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:15.963 [2024-11-17 23:07:12.559919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.963 [2024-11-17 23:07:12.559935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:15.963 [2024-11-17 23:07:12.559988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:15.963 [2024-11-17 23:07:12.560004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:15.963 [2024-11-17 23:07:12.560056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:15.963 [2024-11-17 23:07:12.560071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.222 #11 NEW cov: 11793 ft: 13372 corp: 7/139b lim: 35 exec/s: 0 rss: 68Mb L: 30/30 MS: 1 InsertRepeatedBytes- 00:07:16.222 [2024-11-17 23:07:12.599505] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:16.222 [2024-11-17 23:07:12.599636] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:16.222 [2024-11-17 23:07:12.599745] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:16.222 [2024-11-17 23:07:12.599947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.222 [2024-11-17 23:07:12.599972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.222 [2024-11-17 23:07:12.600025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.222 [2024-11-17 23:07:12.600042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.222 [2024-11-17 23:07:12.600095] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.222 [2024-11-17 23:07:12.600110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.222 [2024-11-17 23:07:12.600163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.222 [2024-11-17 23:07:12.600178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.222 #12 NEW cov: 11793 ft: 13498 corp: 8/169b lim: 35 exec/s: 0 rss: 68Mb L: 30/30 MS: 1 CopyPart- 00:07:16.223 [2024-11-17 23:07:12.639763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.223 [2024-11-17 23:07:12.639788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.223 [2024-11-17 23:07:12.639842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.223 [2024-11-17 23:07:12.639856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.223 #13 NEW cov: 11793 ft: 13615 corp: 9/185b lim: 35 exec/s: 0 rss: 69Mb L: 16/30 MS: 1 ChangeByte- 
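The #N NEW records in this run are libFuzzer status lines: cov counts covered code points, ft counts features, corp is the corpus size in entries/bytes, L is the new input's length against the lim byte cap, exec/s is the execution rate, and MS lists the mutation sequence that produced the input. For orientation, a minimal harness of the same shape is sketched below. This is a generic illustration, not the SPDK source: only the LLVMFuzzerTestOneInput entry point is the real libFuzzer interface, and consume() is a hypothetical stand-in for the NVMe admin-command handling that test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c performs.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical stand-in for the code under test; the real harness
     * decodes the bytes into an NVMe admin command and submits it. */
    static void consume(const uint8_t *data, size_t size)
    {
        volatile uint8_t sink = 0;
        for (size_t i = 0; i < size; i++) {
            sink ^= data[i]; /* touch every byte so coverage is exercised */
        }
        (void)sink;
    }

    /* libFuzzer calls this once per generated input. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        if (size < 8) {
            return 0; /* too short to form a command; skip */
        }
        consume(data, size);
        return 0; /* non-crashing inputs must return 0 */
    }

Built with clang -fsanitize=fuzzer, such a harness produces exactly the kind of NEW/DONE status lines seen in this log.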
00:07:16.223 [2024-11-17 23:07:12.679990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:d200d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.223 [2024-11-17 23:07:12.680016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.223 [2024-11-17 23:07:12.680072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00d2 cdw11:ff00ff23 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.223 [2024-11-17 23:07:12.680086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.223 [2024-11-17 23:07:12.680139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ff40 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.223 [2024-11-17 23:07:12.680153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.223 #14 NEW cov: 11793 ft: 13669 corp: 10/206b lim: 35 exec/s: 0 rss: 69Mb L: 21/30 MS: 1 InsertRepeatedBytes- 00:07:16.223 [2024-11-17 23:07:12.720092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:d200d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.223 [2024-11-17 23:07:12.720116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.223 [2024-11-17 23:07:12.720185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00d2 cdw11:2300ffeb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.223 [2024-11-17 23:07:12.720199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.223 [2024-11-17 23:07:12.720253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:4000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.223 [2024-11-17 23:07:12.720266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.223 #15 NEW cov: 11793 ft: 13702 corp: 11/228b lim: 35 exec/s: 0 rss: 69Mb L: 22/30 MS: 1 InsertByte- 00:07:16.223 [2024-11-17 23:07:12.760087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.223 [2024-11-17 23:07:12.760112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.223 [2024-11-17 23:07:12.760168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.223 [2024-11-17 23:07:12.760181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.223 #16 NEW cov: 11793 ft: 13738 corp: 12/243b lim: 35 exec/s: 0 rss: 69Mb L: 15/30 MS: 1 CrossOver- 00:07:16.223 [2024-11-17 23:07:12.800356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00d2 cdw11:2300ffeb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.223 [2024-11-17 23:07:12.800380] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.223 [2024-11-17 23:07:12.800435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:4000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.223 [2024-11-17 23:07:12.800449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.223 NEW_FUNC[1/1]: 0x10ba328 in spdk_nvmf_ctrlr_identify_iocs_specific /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:2921 00:07:16.223 #17 NEW cov: 11810 ft: 13811 corp: 13/265b lim: 35 exec/s: 0 rss: 69Mb L: 22/30 MS: 1 CMP- DE: "\006\000"- 00:07:16.482 [2024-11-17 23:07:12.850231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.482 [2024-11-17 23:07:12.850255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.482 #18 NEW cov: 11810 ft: 14138 corp: 14/277b lim: 35 exec/s: 0 rss: 69Mb L: 12/30 MS: 1 EraseBytes- 00:07:16.482 [2024-11-17 23:07:12.890281] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:16.482 [2024-11-17 23:07:12.890395] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:16.482 [2024-11-17 23:07:12.890599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.482 [2024-11-17 23:07:12.890624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.482 [2024-11-17 23:07:12.890678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.482 [2024-11-17 23:07:12.890695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.482 [2024-11-17 23:07:12.890746] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.482 [2024-11-17 23:07:12.890762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.482 #19 NEW cov: 11810 ft: 14182 corp: 15/304b lim: 35 exec/s: 0 rss: 69Mb L: 27/30 MS: 1 EraseBytes- 00:07:16.482 [2024-11-17 23:07:12.930847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.482 [2024-11-17 23:07:12.930873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.482 [2024-11-17 23:07:12.930908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:444400ff cdw11:44004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.482 [2024-11-17 23:07:12.930921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.482 [2024-11-17 23:07:12.930976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:44ff0044 cdw11:44004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.482 [2024-11-17 23:07:12.930989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.482 [2024-11-17 23:07:12.931044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:8b8b00ff cdw11:8b008b8b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.483 [2024-11-17 23:07:12.931057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.483 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:16.483 #20 NEW cov: 11833 ft: 14236 corp: 16/336b lim: 35 exec/s: 0 rss: 69Mb L: 32/32 MS: 1 InsertRepeatedBytes- 00:07:16.483 [2024-11-17 23:07:12.980596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.483 [2024-11-17 23:07:12.980620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.483 #21 NEW cov: 11833 ft: 14269 corp: 17/348b lim: 35 exec/s: 0 rss: 69Mb L: 12/32 MS: 1 ShuffleBytes- 00:07:16.483 [2024-11-17 23:07:13.021182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.483 [2024-11-17 23:07:13.021206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.483 [2024-11-17 23:07:13.021264] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:444400ff cdw11:44004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.483 [2024-11-17 23:07:13.021277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.483 [2024-11-17 23:07:13.021330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:44440044 cdw11:4400ff44 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.483 [2024-11-17 23:07:13.021343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.483 [2024-11-17 23:07:13.021398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:44440044 cdw11:44004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.483 [2024-11-17 23:07:13.021410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:16.483 [2024-11-17 23:07:13.021463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.483 [2024-11-17 23:07:13.021477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:16.483 #22 NEW cov: 11833 ft: 14322 corp: 18/383b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:07:16.483 [2024-11-17 23:07:13.060797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:ff00eb23 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.483 [2024-11-17 
23:07:13.060823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.483 #23 NEW cov: 11833 ft: 14433 corp: 19/393b lim: 35 exec/s: 23 rss: 69Mb L: 10/35 MS: 1 CrossOver- 00:07:16.742 [2024-11-17 23:07:13.101125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.742 [2024-11-17 23:07:13.101150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.742 [2024-11-17 23:07:13.101206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:2700ff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.742 [2024-11-17 23:07:13.101220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.742 [2024-11-17 23:07:13.101274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.742 [2024-11-17 23:07:13.101287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.742 #24 NEW cov: 11833 ft: 14500 corp: 20/417b lim: 35 exec/s: 24 rss: 69Mb L: 24/35 MS: 1 CrossOver- 00:07:16.742 [2024-11-17 23:07:13.141240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ff60 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.742 [2024-11-17 23:07:13.141265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.742 [2024-11-17 23:07:13.141321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:444400ff cdw11:44004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.742 [2024-11-17 23:07:13.141334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.743 [2024-11-17 23:07:13.141404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:44440044 cdw11:ff004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.743 [2024-11-17 23:07:13.141419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:16.743 #30 NEW cov: 11833 ft: 14513 corp: 21/443b lim: 35 exec/s: 30 rss: 69Mb L: 26/35 MS: 1 ChangeByte- 00:07:16.743 [2024-11-17 23:07:13.181132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.743 [2024-11-17 23:07:13.181158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.743 #31 NEW cov: 11833 ft: 14546 corp: 22/456b lim: 35 exec/s: 31 rss: 69Mb L: 13/35 MS: 1 InsertByte- 00:07:16.743 [2024-11-17 23:07:13.221265] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.743 [2024-11-17 23:07:13.221289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.743 #32 NEW cov: 11833 
ft: 14556 corp: 23/468b lim: 35 exec/s: 32 rss: 69Mb L: 12/35 MS: 1 PersAutoDict- DE: "\006\000"- 00:07:16.743 [2024-11-17 23:07:13.261477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.743 [2024-11-17 23:07:13.261501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.743 [2024-11-17 23:07:13.261560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff0a00ff cdw11:ff0027b5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.743 [2024-11-17 23:07:13.261574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:16.743 #33 NEW cov: 11833 ft: 14573 corp: 24/483b lim: 35 exec/s: 33 rss: 69Mb L: 15/35 MS: 1 CopyPart- 00:07:16.743 [2024-11-17 23:07:13.291468] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.743 [2024-11-17 23:07:13.291491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.743 #34 NEW cov: 11833 ft: 14639 corp: 25/494b lim: 35 exec/s: 34 rss: 69Mb L: 11/35 MS: 1 EraseBytes- 00:07:16.743 [2024-11-17 23:07:13.331714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.743 [2024-11-17 23:07:13.331738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:16.743 [2024-11-17 23:07:13.331793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff0a00ff cdw11:060027b5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:16.743 [2024-11-17 23:07:13.331806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.002 #35 NEW cov: 11833 ft: 14711 corp: 26/509b lim: 35 exec/s: 35 rss: 69Mb L: 15/35 MS: 1 PersAutoDict- DE: "\006\000"- 00:07:17.002 [2024-11-17 23:07:13.371644] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:17.002 [2024-11-17 23:07:13.371852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:0000ff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.002 [2024-11-17 23:07:13.371877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.002 [2024-11-17 23:07:13.371933] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ff0a0005 cdw11:ff0027b5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.002 [2024-11-17 23:07:13.371949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.002 NEW_FUNC[1/1]: 0x10b8018 in spdk_nvmf_ns_identify_iocs_specific /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:2843 00:07:17.002 #36 NEW cov: 11851 ft: 14759 corp: 27/524b lim: 35 exec/s: 36 rss: 69Mb L: 15/35 MS: 1 ChangeBinInt- 00:07:17.002 [2024-11-17 23:07:13.411831] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid 
NSID 0 00:07:17.002 [2024-11-17 23:07:13.411963] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:17.002 [2024-11-17 23:07:13.412269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.002 [2024-11-17 23:07:13.412295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.002 [2024-11-17 23:07:13.412351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.002 [2024-11-17 23:07:13.412366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.002 [2024-11-17 23:07:13.412420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:e0000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.002 [2024-11-17 23:07:13.412435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.002 [2024-11-17 23:07:13.412488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:e0e000e0 cdw11:0000e000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.002 [2024-11-17 23:07:13.412502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.002 #37 NEW cov: 11851 ft: 14772 corp: 28/557b lim: 35 exec/s: 37 rss: 69Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:07:17.002 [2024-11-17 23:07:13.451907] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:17.002 [2024-11-17 23:07:13.452118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.002 [2024-11-17 23:07:13.452143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.003 [2024-11-17 23:07:13.452196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0f000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-11-17 23:07:13.452211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.003 #38 NEW cov: 11851 ft: 14812 corp: 29/572b lim: 35 exec/s: 38 rss: 70Mb L: 15/35 MS: 1 ChangeBinInt- 00:07:17.003 [2024-11-17 23:07:13.482020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ff05 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-11-17 23:07:13.482044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.003 #39 NEW cov: 11851 ft: 14826 corp: 30/584b lim: 35 exec/s: 39 rss: 70Mb L: 12/35 MS: 1 EraseBytes- 00:07:17.003 [2024-11-17 23:07:13.522526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-11-17 23:07:13.522553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.003 [2024-11-17 23:07:13.522606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-11-17 23:07:13.522619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.003 [2024-11-17 23:07:13.522671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-11-17 23:07:13.522684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.003 [2024-11-17 23:07:13.522741] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-11-17 23:07:13.522754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.003 #40 NEW cov: 11851 ft: 14838 corp: 31/617b lim: 35 exec/s: 40 rss: 70Mb L: 33/35 MS: 1 InsertRepeatedBytes- 00:07:17.003 [2024-11-17 23:07:13.562479] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:17.003 [2024-11-17 23:07:13.562705] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-11-17 23:07:13.562730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.003 [2024-11-17 23:07:13.562785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:444400ff cdw11:44004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-11-17 23:07:13.562799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.003 [2024-11-17 23:07:13.562853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:44440044 cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-11-17 23:07:13.562866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.003 [2024-11-17 23:07:13.562919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:44000044 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-11-17 23:07:13.562933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.003 #41 NEW cov: 11851 ft: 14910 corp: 32/651b lim: 35 exec/s: 41 rss: 70Mb L: 34/35 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\000"- 00:07:17.003 [2024-11-17 23:07:13.602356] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:17.003 [2024-11-17 23:07:13.602657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:0000ff01 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-11-17 23:07:13.602681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.003 [2024-11-17 
23:07:13.602737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00050005 cdw11:2700ff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-11-17 23:07:13.602752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.003 [2024-11-17 23:07:13.602807] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ff0a00ff cdw11:ff0027b5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.003 [2024-11-17 23:07:13.602820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.262 #42 NEW cov: 11851 ft: 14921 corp: 33/673b lim: 35 exec/s: 42 rss: 70Mb L: 22/35 MS: 1 CopyPart- 00:07:17.262 [2024-11-17 23:07:13.642443] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:17.262 [2024-11-17 23:07:13.642659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.262 [2024-11-17 23:07:13.642684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.262 [2024-11-17 23:07:13.642738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:8f000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.262 [2024-11-17 23:07:13.642753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.263 #43 NEW cov: 11851 ft: 14935 corp: 34/688b lim: 35 exec/s: 43 rss: 70Mb L: 15/35 MS: 1 ChangeBit- 00:07:17.263 [2024-11-17 23:07:13.682679] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:17.263 [2024-11-17 23:07:13.682920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.263 [2024-11-17 23:07:13.682944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.263 [2024-11-17 23:07:13.683002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0000ff0f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.263 [2024-11-17 23:07:13.683016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.263 [2024-11-17 23:07:13.683069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.263 [2024-11-17 23:07:13.683083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.263 #44 NEW cov: 11851 ft: 14944 corp: 35/712b lim: 35 exec/s: 44 rss: 70Mb L: 24/35 MS: 1 CMP- DE: "\017\000\000\000\000\000\000\000"- 00:07:17.263 [2024-11-17 23:07:13.723007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.263 [2024-11-17 23:07:13.723030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.263 [2024-11-17 23:07:13.723084] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:444400ff cdw11:44004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.263 [2024-11-17 23:07:13.723098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.263 [2024-11-17 23:07:13.723153] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:44440044 cdw11:44004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.263 [2024-11-17 23:07:13.723166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.263 #45 NEW cov: 11851 ft: 14975 corp: 36/738b lim: 35 exec/s: 45 rss: 70Mb L: 26/35 MS: 1 CopyPart- 00:07:17.263 [2024-11-17 23:07:13.753193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.263 [2024-11-17 23:07:13.753217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.263 [2024-11-17 23:07:13.753268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:444400ff cdw11:44004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.263 [2024-11-17 23:07:13.753282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.263 [2024-11-17 23:07:13.753332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:44ff0044 cdw11:44004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.263 [2024-11-17 23:07:13.753345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.263 [2024-11-17 23:07:13.753397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:8b8b00ff cdw11:8b008b8b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.263 [2024-11-17 23:07:13.753410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.263 #46 NEW cov: 11851 ft: 14980 corp: 37/770b lim: 35 exec/s: 46 rss: 70Mb L: 32/35 MS: 1 CopyPart- 00:07:17.263 [2024-11-17 23:07:13.792902] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:17.263 [2024-11-17 23:07:13.793116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ffff000a cdw11:00000100 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.263 [2024-11-17 23:07:13.793140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.263 [2024-11-17 23:07:13.793195] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:230000eb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.263 [2024-11-17 23:07:13.793211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.263 #47 NEW cov: 11851 ft: 14990 corp: 38/788b lim: 35 exec/s: 47 rss: 70Mb L: 18/35 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\000"- 00:07:17.263 [2024-11-17 23:07:13.833333] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.263 [2024-11-17 23:07:13.833357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.263 [2024-11-17 23:07:13.833413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:2700ff0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.263 [2024-11-17 23:07:13.833426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.263 [2024-11-17 23:07:13.833479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:ffff00ff cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.263 [2024-11-17 23:07:13.833492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.263 #48 NEW cov: 11851 ft: 14995 corp: 39/812b lim: 35 exec/s: 48 rss: 70Mb L: 24/35 MS: 1 ChangeBit- 00:07:17.263 [2024-11-17 23:07:13.873197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.263 [2024-11-17 23:07:13.873221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.522 #49 NEW cov: 11851 ft: 15019 corp: 40/823b lim: 35 exec/s: 49 rss: 70Mb L: 11/35 MS: 1 CMP- DE: "\000\001"- 00:07:17.522 [2024-11-17 23:07:13.913231] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:17.522 [2024-11-17 23:07:13.913435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.522 [2024-11-17 23:07:13.913459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.523 [2024-11-17 23:07:13.913517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:f1000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.523 [2024-11-17 23:07:13.913536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.523 #50 NEW cov: 11851 ft: 15087 corp: 41/838b lim: 35 exec/s: 50 rss: 70Mb L: 15/35 MS: 1 ChangeBinInt- 00:07:17.523 [2024-11-17 23:07:13.953436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff0028ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.523 [2024-11-17 23:07:13.953459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.523 #51 NEW cov: 11851 ft: 15118 corp: 42/850b lim: 35 exec/s: 51 rss: 70Mb L: 12/35 MS: 1 ChangeByte- 00:07:17.523 [2024-11-17 23:07:13.993934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.523 [2024-11-17 23:07:13.993961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.523 [2024-11-17 23:07:13.994017] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:444400ff cdw11:44004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.523 [2024-11-17 23:07:13.994030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.523 [2024-11-17 23:07:13.994081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:44440044 cdw11:44004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.523 [2024-11-17 23:07:13.994094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.523 [2024-11-17 23:07:13.994146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:44440044 cdw11:44004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.523 [2024-11-17 23:07:13.994158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.523 [2024-11-17 23:07:14.034050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.523 [2024-11-17 23:07:14.034074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.523 [2024-11-17 23:07:14.034185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:44440044 cdw11:44004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.523 [2024-11-17 23:07:14.034199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.523 [2024-11-17 23:07:14.034254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:44440044 cdw11:44004444 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.523 [2024-11-17 23:07:14.034267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:17.523 #53 NEW cov: 11851 ft: 15123 corp: 43/882b lim: 35 exec/s: 53 rss: 70Mb L: 32/35 MS: 2 CopyPart-ChangeBinInt- 00:07:17.523 [2024-11-17 23:07:14.073819] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:17.523 [2024-11-17 23:07:14.074047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:27b5000a cdw11:ff00ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.523 [2024-11-17 23:07:14.074071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:17.523 [2024-11-17 23:07:14.074127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ffff00ff cdw11:0400ff0f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.523 [2024-11-17 23:07:14.074141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:17.523 [2024-11-17 23:07:14.074193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:ff000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:17.523 [2024-11-17 23:07:14.074209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:17.523 #54 NEW cov: 11851 ft: 
15129 corp: 44/906b lim: 35 exec/s: 27 rss: 70Mb L: 24/35 MS: 1 ChangeBit-
00:07:17.523 #54 DONE cov: 11851 ft: 15129 corp: 44/906b lim: 35 exec/s: 27 rss: 70Mb
00:07:17.523 ###### Recommended dictionary. ######
00:07:17.523 "\006\000" # Uses: 2
00:07:17.523 "\001\000\000\000\000\000\000\000" # Uses: 1
00:07:17.523 "\017\000\000\000\000\000\000\000" # Uses: 0
00:07:17.523 "\000\001" # Uses: 0
00:07:17.523 ###### End of recommended dictionary. ######
00:07:17.523 Done 54 runs in 2 second(s)
00:07:17.782 23:07:14 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_2.conf
00:07:17.782 23:07:14 -- ../common.sh@72 -- # (( i++ ))
00:07:17.782 23:07:14 -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:17.782 23:07:14 -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1
00:07:17.782 23:07:14 -- nvmf/run.sh@23 -- # local fuzzer_type=3
00:07:17.782 23:07:14 -- nvmf/run.sh@24 -- # local timen=1
00:07:17.782 23:07:14 -- nvmf/run.sh@25 -- # local core=0x1
00:07:17.782 23:07:14 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3
00:07:17.782 23:07:14 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf
00:07:17.782 23:07:14 -- nvmf/run.sh@29 -- # printf %02d 3
00:07:17.782 23:07:14 -- nvmf/run.sh@29 -- # port=4403
00:07:17.782 23:07:14 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3
00:07:17.782 23:07:14 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403'
00:07:17.782 23:07:14 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:17.782 23:07:14 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 -r /var/tmp/spdk3.sock
00:07:17.783 [2024-11-17 23:07:14.260645] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:17.783 [2024-11-17 23:07:14.260711] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1297934 ]
00:07:17.783 EAL: No free 2048 kB hugepages reported on node 1
00:07:18.041 [2024-11-17 23:07:14.441881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:18.041 [2024-11-17 23:07:14.505480] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:18.041 [2024-11-17 23:07:14.505627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:18.041 [2024-11-17 23:07:14.563543] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:18.041 [2024-11-17 23:07:14.579872] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 ***
00:07:18.041 INFO: Running with entropic power schedule (0xFF, 100).
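Before the run-3 status lines continue below, note that the recommended-dictionary block printed above at the end of run 2 can be carried forward to later runs. As a sketch (this job does not actually write such a file; the file name and the kw* labels are invented here), the four entries translate into libFuzzer's dictionary file format with the log's octal escapes rewritten as hex:

    # nvme_fuzz.dict -- hypothetical dictionary built from the entries above
    kw1="\x06\x00"
    kw2="\x01\x00\x00\x00\x00\x00\x00\x00"
    kw3="\x0f\x00\x00\x00\x00\x00\x00\x00"
    kw4="\x00\x01"

A later invocation would pick it up by appending -dict=nvme_fuzz.dict to the llvm_nvme_fuzz command line shown above.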
00:07:18.041 INFO: Seed: 2505255853
00:07:18.041 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5),
00:07:18.041 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668),
00:07:18.041 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3
00:07:18.041 INFO: A corpus is not provided, starting from an empty corpus
00:07:18.041 #2 INITED exec/s: 0 rss: 60Mb
00:07:18.041 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:18.041 This may also happen if the target rejected all inputs we tried so far
00:07:18.559 NEW_FUNC[1/659]: 0x43f858 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114
00:07:18.560 NEW_FUNC[2/659]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:18.560 #7 NEW cov: 11474 ft: 11476 corp: 2/5b lim: 20 exec/s: 0 rss: 68Mb L: 4/4 MS: 5 ChangeBit-ChangeByte-CopyPart-InsertByte-CopyPart-
00:07:18.560 #14 NEW cov: 11603 ft: 12315 corp: 3/14b lim: 20 exec/s: 0 rss: 68Mb L: 9/9 MS: 2 InsertByte-InsertRepeatedBytes-
00:07:18.560 #16 NEW cov: 11609 ft: 12713 corp: 4/20b lim: 20 exec/s: 0 rss: 68Mb L: 6/9 MS: 2 CopyPart-CrossOver-
00:07:18.560 #17 NEW cov: 11694 ft: 12886 corp: 5/28b lim: 20 exec/s: 0 rss: 68Mb L: 8/9 MS: 1 EraseBytes-
00:07:18.560 #18 NEW cov: 11694 ft: 12964 corp: 6/37b lim: 20 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 CopyPart-
00:07:18.818 #19 NEW cov: 11694 ft: 13025 corp: 7/47b lim: 20 exec/s: 0 rss: 68Mb L: 10/10 MS: 1 InsertByte-
00:07:18.818 #20 NEW cov: 11694 ft: 13071 corp: 8/56b lim: 20 exec/s: 0 rss: 68Mb L: 9/10 MS: 1 ChangeBit-
00:07:18.819 #21 NEW cov: 11694 ft: 13095 corp: 9/64b lim: 20 exec/s: 0 rss: 68Mb L: 8/10 MS: 1 ChangeBinInt-
00:07:18.819 #24 NEW cov: 11694 ft: 13112 corp: 10/70b lim: 20 exec/s: 0 rss: 68Mb L: 6/10 MS: 3 ChangeByte-InsertByte-CrossOver-
00:07:18.819 #25 NEW cov: 11694 ft: 13222 corp: 11/80b lim: 20 exec/s: 0 rss: 68Mb L: 10/10 MS: 1 InsertByte-
00:07:19.077 #26 NEW cov: 11694 ft: 13250 corp: 12/86b lim: 20 exec/s: 0 rss: 68Mb L: 6/10 MS: 1 ShuffleBytes-
00:07:19.077 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609
00:07:19.077 #27 NEW cov: 11717 ft: 13407 corp: 13/94b lim: 20 exec/s: 0 rss: 68Mb L: 8/10 MS: 1 EraseBytes-
00:07:19.077 #28 NEW cov: 11717 ft: 13455 corp: 14/101b lim: 20 exec/s: 0 rss: 68Mb L: 7/10 MS: 1 EraseBytes-
00:07:19.077 #29 NEW cov: 11717 ft: 13471 corp: 15/109b lim: 20 exec/s: 29 rss: 69Mb L: 8/10 MS: 1 ChangeBit-
00:07:19.077 #30 NEW cov: 11717 ft: 13477 corp: 16/116b lim: 20 exec/s: 30 rss: 69Mb L: 7/10 MS: 1 CrossOver-
00:07:19.338 #36 NEW cov: 11717 ft: 13489 corp: 17/124b lim: 20 exec/s: 36 rss: 69Mb L: 8/10 MS: 1 CrossOver-
00:07:19.338 #37 NEW cov: 11717 ft: 13509 corp: 18/130b lim: 20 exec/s: 37 rss: 69Mb L: 6/10 MS: 1 EraseBytes-
00:07:19.338 #38 NEW cov: 11717 ft: 13648 corp: 19/135b lim: 20 exec/s: 38 rss: 69Mb L: 5/10 MS: 1 EraseBytes-
00:07:19.338 #39 NEW cov: 11717 ft: 13785 corp: 20/139b lim: 20 exec/s: 39 rss: 69Mb L: 4/10 MS: 1 ShuffleBytes-
00:07:19.596 #40 NEW cov: 11717 ft: 13834 corp: 21/147b lim: 20 exec/s: 40 rss: 69Mb L: 8/10 MS: 1 ChangeBinInt-
00:07:19.597 #46 NEW cov: 11721 ft: 14119 corp: 22/162b lim: 20 exec/s: 46 rss: 69Mb L: 15/15 MS: 1 InsertRepeatedBytes-
00:07:19.597 [2024-11-17 23:07:16.051406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:07:19.597 [2024-11-17 23:07:16.051476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:07:19.597 NEW_FUNC[1/19]: 0x111e188 in nvmf_qpair_abort_request /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3224
00:07:19.597 NEW_FUNC[2/19]: 0x111ed08 in nvmf_qpair_abort_aer /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:3166
00:07:19.597 #47 NEW cov: 12039 ft: 14692 corp: 23/179b lim: 20 exec/s: 47 rss: 70Mb L: 17/17 MS: 1 InsertRepeatedBytes-
00:07:19.597 #48 NEW cov: 12039 ft: 14698 corp: 24/187b lim: 20 exec/s: 48 rss: 70Mb L: 8/17 MS: 1 ChangeBinInt-
00:07:19.597 #49 NEW cov: 12039 ft: 14707 corp: 25/196b lim: 20 exec/s: 49 rss: 70Mb L: 9/17 MS: 1 InsertByte-
00:07:19.855 #50 NEW cov: 12039 ft: 14712 corp: 26/202b lim: 20 exec/s: 50 rss: 70Mb L: 6/17 MS: 1 EraseBytes-
00:07:19.855 #51 NEW cov: 12039 ft: 14772 corp: 27/211b lim: 20 exec/s: 51 rss: 70Mb L: 9/17 MS: 1 CopyPart-
00:07:19.855 #54 NEW cov: 12039 ft: 14877 corp: 28/230b lim: 20 exec/s: 54 rss: 70Mb L: 19/19 MS: 3 EraseBytes-ChangeBit-InsertRepeatedBytes-
00:07:19.855 #55 NEW cov: 12039 ft: 14888 corp: 29/238b lim: 20 exec/s: 55 rss: 70Mb L: 8/19 MS: 1 ChangeBinInt-
00:07:19.855 #56 NEW cov: 12039 ft: 14900 corp: 30/242b lim: 20 exec/s: 56 rss: 70Mb L: 4/19 MS: 1 ChangeByte-
00:07:20.114 #58 NEW cov: 12039 ft: 14927 corp: 31/246b lim: 20 exec/s: 58 rss: 70Mb L: 4/19 MS: 2 EraseBytes-InsertByte-
00:07:20.114 #59 NEW cov: 12039 ft: 14938 corp: 32/256b lim: 20 exec/s: 59 rss: 70Mb L: 10/19 MS: 1 ChangeBit-
00:07:20.114 #60 NEW cov: 12039 ft: 14945 corp: 33/262b lim: 20 exec/s: 60 rss: 70Mb L: 6/19 MS: 1 EraseBytes-
00:07:20.114 #61 NEW cov: 12039 ft: 14969 corp: 34/268b lim: 20 exec/s: 30 rss: 70Mb L: 6/19 MS: 1 ChangeBit-
00:07:20.114 #61 DONE cov: 12039 ft: 14969 corp: 34/268b lim: 20 exec/s: 30 rss: 70Mb
00:07:20.114 Done 61 runs in 2 second(s)
00:07:20.374 23:07:16 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_3.conf
00:07:20.374 23:07:16 -- ../common.sh@72 -- # (( i++ ))
00:07:20.374 23:07:16 -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:20.374 23:07:16 -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1
00:07:20.374 23:07:16 -- nvmf/run.sh@23 -- # local fuzzer_type=4
00:07:20.374 23:07:16 -- nvmf/run.sh@24 -- # local timen=1
00:07:20.374 23:07:16 -- nvmf/run.sh@25 -- # local core=0x1
00:07:20.374 23:07:16 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4
00:07:20.374 23:07:16 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf
00:07:20.374 23:07:16 -- nvmf/run.sh@29 -- # printf %02d 4
00:07:20.374 23:07:16 -- nvmf/run.sh@29 -- # port=4404
00:07:20.374 23:07:16 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4
00:07:20.374 23:07:16 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404'
00:07:20.374 23:07:16 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:20.374 23:07:16 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 -r /var/tmp/spdk4.sock
00:07:20.374 [2024-11-17 23:07:16.806185] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:20.374 [2024-11-17 23:07:16.806250] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298455 ]
00:07:20.374 EAL: No free 2048 kB hugepages reported on node 1
00:07:20.633 [2024-11-17 23:07:16.982019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:20.633 [2024-11-17 23:07:17.045688] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:20.633 [2024-11-17 23:07:17.045808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:20.633 [2024-11-17 23:07:17.103888] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:20.633 [2024-11-17 23:07:17.120173] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 ***
00:07:20.633 INFO: Running with entropic power schedule (0xFF, 100).
00:07:20.633 INFO: Seed: 748255460
00:07:20.633 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5),
00:07:20.633 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668),
00:07:20.633 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4
00:07:20.633 INFO: A corpus is not provided, starting from an empty corpus
00:07:20.633 #2 INITED exec/s: 0 rss: 60Mb
00:07:20.633 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:20.633 This may also happen if the target rejected all inputs we tried so far 00:07:20.633 [2024-11-17 23:07:17.169364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0a cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.633 [2024-11-17 23:07:17.169393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.633 [2024-11-17 23:07:17.169447] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.633 [2024-11-17 23:07:17.169461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.633 [2024-11-17 23:07:17.169514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.633 [2024-11-17 23:07:17.169527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:20.633 [2024-11-17 23:07:17.169582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.633 [2024-11-17 23:07:17.169596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:20.893 NEW_FUNC[1/671]: 0x440958 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:07:20.893 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:20.893 #4 NEW cov: 11601 ft: 11602 corp: 2/33b lim: 35 exec/s: 0 rss: 68Mb L: 32/32 MS: 2 CopyPart-InsertRepeatedBytes- 00:07:20.893 [2024-11-17 23:07:17.469808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.893 [2024-11-17 23:07:17.469843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:20.893 [2024-11-17 23:07:17.469897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:20.893 [2024-11-17 23:07:17.469910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:20.893 #9 NEW cov: 11714 ft: 12444 corp: 3/48b lim: 35 exec/s: 0 rss: 68Mb L: 15/32 MS: 5 CopyPart-EraseBytes-ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:21.152 [2024-11-17 23:07:17.510174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.152 [2024-11-17 23:07:17.510201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.152 [2024-11-17 23:07:17.510253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.152 [2024-11-17 
23:07:17.510266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.152 [2024-11-17 23:07:17.510318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.152 [2024-11-17 23:07:17.510331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.152 [2024-11-17 23:07:17.510382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.152 [2024-11-17 23:07:17.510395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.152 #10 NEW cov: 11720 ft: 12634 corp: 4/80b lim: 35 exec/s: 0 rss: 68Mb L: 32/32 MS: 1 ChangeBit- 00:07:21.152 [2024-11-17 23:07:17.550251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0a cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.152 [2024-11-17 23:07:17.550277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.152 [2024-11-17 23:07:17.550332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.152 [2024-11-17 23:07:17.550345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.152 [2024-11-17 23:07:17.550395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.152 [2024-11-17 23:07:17.550409] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.152 [2024-11-17 23:07:17.550460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:302e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.152 [2024-11-17 23:07:17.550473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.152 #11 NEW cov: 11805 ft: 12878 corp: 5/113b lim: 35 exec/s: 0 rss: 68Mb L: 33/33 MS: 1 InsertByte- 00:07:21.152 [2024-11-17 23:07:17.590395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0a cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.152 [2024-11-17 23:07:17.590419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.152 [2024-11-17 23:07:17.590489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.152 [2024-11-17 23:07:17.590502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.153 [2024-11-17 23:07:17.590559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.153 [2024-11-17 
23:07:17.590573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.153 [2024-11-17 23:07:17.590623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:302e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.153 [2024-11-17 23:07:17.590636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.153 #12 NEW cov: 11805 ft: 12942 corp: 6/144b lim: 35 exec/s: 0 rss: 68Mb L: 31/33 MS: 1 EraseBytes- 00:07:21.153 [2024-11-17 23:07:17.630498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e200a0a cdw11:002e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.153 [2024-11-17 23:07:17.630524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.153 [2024-11-17 23:07:17.630600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.153 [2024-11-17 23:07:17.630615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.153 [2024-11-17 23:07:17.630666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.153 [2024-11-17 23:07:17.630679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.153 [2024-11-17 23:07:17.630731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.153 [2024-11-17 23:07:17.630744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.153 #13 NEW cov: 11805 ft: 13022 corp: 7/176b lim: 35 exec/s: 0 rss: 68Mb L: 32/33 MS: 1 ChangeBinInt- 00:07:21.153 [2024-11-17 23:07:17.670629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.153 [2024-11-17 23:07:17.670654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.153 [2024-11-17 23:07:17.670707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.153 [2024-11-17 23:07:17.670720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.153 [2024-11-17 23:07:17.670771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.153 [2024-11-17 23:07:17.670783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.153 [2024-11-17 23:07:17.670835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.153 [2024-11-17 
23:07:17.670847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.153 #14 NEW cov: 11805 ft: 13191 corp: 8/208b lim: 35 exec/s: 0 rss: 68Mb L: 32/33 MS: 1 ShuffleBytes- 00:07:21.153 [2024-11-17 23:07:17.710772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.153 [2024-11-17 23:07:17.710798] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.153 [2024-11-17 23:07:17.710853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.153 [2024-11-17 23:07:17.710867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.153 [2024-11-17 23:07:17.710918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.153 [2024-11-17 23:07:17.710931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.153 [2024-11-17 23:07:17.710983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e262e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.153 [2024-11-17 23:07:17.710995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.153 #15 NEW cov: 11805 ft: 13227 corp: 9/240b lim: 35 exec/s: 0 rss: 68Mb L: 32/33 MS: 1 ChangeBit- 00:07:21.153 [2024-11-17 23:07:17.750748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0a cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.153 [2024-11-17 23:07:17.750774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.153 [2024-11-17 23:07:17.750828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.153 [2024-11-17 23:07:17.750841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.153 [2024-11-17 23:07:17.750890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e300000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.153 [2024-11-17 23:07:17.750903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.412 #16 NEW cov: 11805 ft: 13482 corp: 10/267b lim: 35 exec/s: 0 rss: 68Mb L: 27/33 MS: 1 EraseBytes- 00:07:21.412 [2024-11-17 23:07:17.790973] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.412 [2024-11-17 23:07:17.790998] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.412 [2024-11-17 23:07:17.791051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) 
qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:cc2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.412 [2024-11-17 23:07:17.791064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.412 [2024-11-17 23:07:17.791115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.412 [2024-11-17 23:07:17.791128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.412 [2024-11-17 23:07:17.791179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.412 [2024-11-17 23:07:17.791192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.412 #17 NEW cov: 11805 ft: 13528 corp: 11/299b lim: 35 exec/s: 0 rss: 68Mb L: 32/33 MS: 1 ChangeByte- 00:07:21.412 [2024-11-17 23:07:17.831073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.412 [2024-11-17 23:07:17.831098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.412 [2024-11-17 23:07:17.831166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e6e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.412 [2024-11-17 23:07:17.831180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.412 [2024-11-17 23:07:17.831232] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.412 [2024-11-17 23:07:17.831244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.412 [2024-11-17 23:07:17.831296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.412 [2024-11-17 23:07:17.831309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.412 #18 NEW cov: 11805 ft: 13591 corp: 12/331b lim: 35 exec/s: 0 rss: 68Mb L: 32/33 MS: 1 ChangeBit- 00:07:21.412 [2024-11-17 23:07:17.871216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.412 [2024-11-17 23:07:17.871240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.412 [2024-11-17 23:07:17.871309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2ead2e cdw11:2ecc0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.412 [2024-11-17 23:07:17.871324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.412 [2024-11-17 23:07:17.871374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) 
qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.413 [2024-11-17 23:07:17.871388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.413 [2024-11-17 23:07:17.871439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.413 [2024-11-17 23:07:17.871452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.413 #19 NEW cov: 11805 ft: 13707 corp: 13/364b lim: 35 exec/s: 0 rss: 69Mb L: 33/33 MS: 1 InsertByte- 00:07:21.413 [2024-11-17 23:07:17.911157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.413 [2024-11-17 23:07:17.911182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.413 [2024-11-17 23:07:17.911234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.413 [2024-11-17 23:07:17.911247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.413 [2024-11-17 23:07:17.911298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e302e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.413 [2024-11-17 23:07:17.911311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.413 #25 NEW cov: 11805 ft: 13746 corp: 14/389b lim: 35 exec/s: 0 rss: 69Mb L: 25/33 MS: 1 EraseBytes- 00:07:21.413 [2024-11-17 23:07:17.951459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.413 [2024-11-17 23:07:17.951486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.413 [2024-11-17 23:07:17.951543] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:cc2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.413 [2024-11-17 23:07:17.951556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.413 [2024-11-17 23:07:17.951608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.413 [2024-11-17 23:07:17.951622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.413 [2024-11-17 23:07:17.951672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:6e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.413 [2024-11-17 23:07:17.951684] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.413 #26 NEW cov: 11805 ft: 13770 corp: 15/421b lim: 35 exec/s: 0 rss: 69Mb L: 32/33 MS: 1 ChangeBit- 00:07:21.413 
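Aside: the run.sh trace at the start of this run shows how each fuzzer instance gets its own NVMe/TCP port and JSON config. A minimal sketch of that setup, assuming the sed output is redirected into the per-instance config (the redirection is not visible in the xtrace) and with the -P output and -D corpus paths abbreviated:

    i=4                                    # fuzzer_type for this iteration
    port=44$(printf %02d "$i")             # yields 4404, matching trsvcid above
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
        test/fuzz/llvm/nvmf/fuzz_json.conf > "/tmp/fuzz_json_$i.conf"
    ./llvm_nvme_fuzz -m 0x1 -s 512 -t 1 -Z "$i" -r "/var/tmp/spdk$i.sock" \
        -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
        -c "/tmp/fuzz_json_$i.conf"

Each instance therefore fuzzes its own target listener (port 4404 here, per the "Target Listening" line above) and keeps a private corpus in llvm_nvmf_4, so successive runs never collide on ports or corpora.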
[2024-11-17 23:07:17.991516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.413 [2024-11-17 23:07:17.991544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.413 [2024-11-17 23:07:17.991598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.413 [2024-11-17 23:07:17.991612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.413 [2024-11-17 23:07:17.991668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.413 [2024-11-17 23:07:17.991682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.413 [2024-11-17 23:07:17.991732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e262e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.413 [2024-11-17 23:07:17.991745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.413 #27 NEW cov: 11805 ft: 13809 corp: 16/453b lim: 35 exec/s: 0 rss: 69Mb L: 32/33 MS: 1 ChangeByte- 00:07:21.672 [2024-11-17 23:07:18.031665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.031689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.672 [2024-11-17 23:07:18.031742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2ead3e cdw11:2ecc0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.031756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.672 [2024-11-17 23:07:18.031808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.031821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.672 [2024-11-17 23:07:18.031872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.031888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.672 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:21.672 #28 NEW cov: 11828 ft: 13832 corp: 17/486b lim: 35 exec/s: 0 rss: 69Mb L: 33/33 MS: 1 ChangeBit- 00:07:21.672 [2024-11-17 23:07:18.071923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.071948] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.672 [2024-11-17 23:07:18.072000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.072013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.672 [2024-11-17 23:07:18.072064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.072078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.672 [2024-11-17 23:07:18.072128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.072140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.672 [2024-11-17 23:07:18.072191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.072203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:21.672 #29 NEW cov: 11828 ft: 13891 corp: 18/521b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 CrossOver- 00:07:21.672 [2024-11-17 23:07:18.111904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0a cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.111928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.672 [2024-11-17 23:07:18.111984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.111997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.672 [2024-11-17 23:07:18.112048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.112061] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.672 [2024-11-17 23:07:18.112114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2f2e2e cdw11:302e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.112126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.672 #30 NEW cov: 11828 ft: 13975 corp: 19/554b lim: 35 exec/s: 0 rss: 69Mb L: 33/35 MS: 1 ChangeBit- 00:07:21.672 [2024-11-17 23:07:18.152033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.152058] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.672 [2024-11-17 23:07:18.152110] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e6e2e cdw11:2e2e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.152126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.672 [2024-11-17 23:07:18.152177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.152189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.672 [2024-11-17 23:07:18.152240] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:6e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.152252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.672 #31 NEW cov: 11828 ft: 13996 corp: 20/588b lim: 35 exec/s: 31 rss: 69Mb L: 34/35 MS: 1 CopyPart- 00:07:21.672 [2024-11-17 23:07:18.192139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.192164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.672 [2024-11-17 23:07:18.192217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e6e2b cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.192231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.672 [2024-11-17 23:07:18.192283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.672 [2024-11-17 23:07:18.192296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.673 [2024-11-17 23:07:18.192345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.673 [2024-11-17 23:07:18.192358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.673 #32 NEW cov: 11828 ft: 14020 corp: 21/620b lim: 35 exec/s: 32 rss: 69Mb L: 32/35 MS: 1 ChangeBinInt- 00:07:21.673 [2024-11-17 23:07:18.232273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.673 [2024-11-17 23:07:18.232298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.673 [2024-11-17 23:07:18.232351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2ef60000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.673 [2024-11-17 23:07:18.232364] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.673 [2024-11-17 23:07:18.232417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.673 [2024-11-17 23:07:18.232430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.673 [2024-11-17 23:07:18.232483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e262e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.673 [2024-11-17 23:07:18.232496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.673 #33 NEW cov: 11828 ft: 14029 corp: 22/652b lim: 35 exec/s: 33 rss: 69Mb L: 32/35 MS: 1 ChangeByte- 00:07:21.673 [2024-11-17 23:07:18.272369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.673 [2024-11-17 23:07:18.272396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.673 [2024-11-17 23:07:18.272448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e6e2e cdw11:2e2e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.673 [2024-11-17 23:07:18.272461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.673 [2024-11-17 23:07:18.272511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.673 [2024-11-17 23:07:18.272524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.673 [2024-11-17 23:07:18.272581] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:6e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.673 [2024-11-17 23:07:18.272594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.932 #34 NEW cov: 11828 ft: 14033 corp: 23/686b lim: 35 exec/s: 34 rss: 69Mb L: 34/35 MS: 1 ChangeByte- 00:07:21.932 [2024-11-17 23:07:18.312488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:d1d1f6f5 cdw11:d1d10003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.932 [2024-11-17 23:07:18.312514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.932 [2024-11-17 23:07:18.312572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2ece2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.932 [2024-11-17 23:07:18.312585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.932 [2024-11-17 23:07:18.312636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.932 [2024-11-17 23:07:18.312666] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.932 [2024-11-17 23:07:18.312720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:302e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.932 [2024-11-17 23:07:18.312733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.932 #35 NEW cov: 11828 ft: 14038 corp: 24/717b lim: 35 exec/s: 35 rss: 69Mb L: 31/35 MS: 1 ChangeBinInt- 00:07:21.932 [2024-11-17 23:07:18.352628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.932 [2024-11-17 23:07:18.352653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.932 [2024-11-17 23:07:18.352707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:cc2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.932 [2024-11-17 23:07:18.352720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.932 [2024-11-17 23:07:18.352772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.932 [2024-11-17 23:07:18.352785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.932 [2024-11-17 23:07:18.352837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:6e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.932 [2024-11-17 23:07:18.352853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.932 #36 NEW cov: 11828 ft: 14061 corp: 25/750b lim: 35 exec/s: 36 rss: 69Mb L: 33/35 MS: 1 InsertByte- 00:07:21.932 [2024-11-17 23:07:18.392759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.932 [2024-11-17 23:07:18.392783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.933 [2024-11-17 23:07:18.392836] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.933 [2024-11-17 23:07:18.392850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.933 [2024-11-17 23:07:18.392900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2c2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.933 [2024-11-17 23:07:18.392913] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.933 [2024-11-17 23:07:18.392963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.933 [2024-11-17 
23:07:18.392976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.933 #37 NEW cov: 11828 ft: 14111 corp: 26/782b lim: 35 exec/s: 37 rss: 69Mb L: 32/35 MS: 1 ChangeBit- 00:07:21.933 [2024-11-17 23:07:18.432854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0a cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.933 [2024-11-17 23:07:18.432879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.933 [2024-11-17 23:07:18.432928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.933 [2024-11-17 23:07:18.432941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.933 [2024-11-17 23:07:18.432994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.933 [2024-11-17 23:07:18.433007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.933 [2024-11-17 23:07:18.433059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:ec302e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.933 [2024-11-17 23:07:18.433072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.933 #38 NEW cov: 11828 ft: 14114 corp: 27/814b lim: 35 exec/s: 38 rss: 69Mb L: 32/35 MS: 1 InsertByte- 00:07:21.933 [2024-11-17 23:07:18.472838] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:e3e3e3e3 cdw11:e3e30003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.933 [2024-11-17 23:07:18.472863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.933 [2024-11-17 23:07:18.472916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:e3e3e3e3 cdw11:e3e30003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.933 [2024-11-17 23:07:18.472929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.933 [2024-11-17 23:07:18.472980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:e3e3e3e3 cdw11:e3e30003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.933 [2024-11-17 23:07:18.472995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.933 #43 NEW cov: 11828 ft: 14140 corp: 28/840b lim: 35 exec/s: 43 rss: 69Mb L: 26/35 MS: 5 ChangeBinInt-InsertRepeatedBytes-EraseBytes-ChangeBinInt-InsertRepeatedBytes- 00:07:21.933 [2024-11-17 23:07:18.513092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.933 [2024-11-17 23:07:18.513116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:21.933 [2024-11-17 23:07:18.513169] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2a2e6e2e cdw11:2e2e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.933 [2024-11-17 23:07:18.513182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:21.933 [2024-11-17 23:07:18.513234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.933 [2024-11-17 23:07:18.513248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:21.933 [2024-11-17 23:07:18.513298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:6e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:21.933 [2024-11-17 23:07:18.513311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:21.933 #49 NEW cov: 11828 ft: 14147 corp: 29/874b lim: 35 exec/s: 49 rss: 69Mb L: 34/35 MS: 1 ChangeBinInt- 00:07:22.193 [2024-11-17 23:07:18.553205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.553229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.193 [2024-11-17 23:07:18.553283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.553296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.193 [2024-11-17 23:07:18.553347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.553360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.193 [2024-11-17 23:07:18.553411] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.553424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.193 #50 NEW cov: 11828 ft: 14170 corp: 30/906b lim: 35 exec/s: 50 rss: 69Mb L: 32/35 MS: 1 ShuffleBytes- 00:07:22.193 [2024-11-17 23:07:18.593184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0a cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.593209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.193 [2024-11-17 23:07:18.593261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.593275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.193 [2024-11-17 
23:07:18.593326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.593359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.193 #51 NEW cov: 11828 ft: 14238 corp: 31/932b lim: 35 exec/s: 51 rss: 69Mb L: 26/35 MS: 1 EraseBytes- 00:07:22.193 [2024-11-17 23:07:18.633606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:d1d12ed2 cdw11:cb2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.633630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.193 [2024-11-17 23:07:18.633683] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.633697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.193 [2024-11-17 23:07:18.633747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.633760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.193 [2024-11-17 23:07:18.633810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.633823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.193 [2024-11-17 23:07:18.633874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.633887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:22.193 #52 NEW cov: 11828 ft: 14273 corp: 32/967b lim: 35 exec/s: 52 rss: 70Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:22.193 [2024-11-17 23:07:18.673706] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.673730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.193 [2024-11-17 23:07:18.673782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2a2e6e2e cdw11:2e2e0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.673795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.193 [2024-11-17 23:07:18.673846] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.673859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.193 
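Aside: each "#N NEW" record above is libFuzzer reporting that mutation N reached behavior it had not seen before. Taking #51 just above as a worked example (field meanings follow libFuzzer's standard status output; the second number in the L field appears to be the largest unit length seen so far):

    #51                 execution count when the input was found
    cov: 11828          code blocks/edges covered by the corpus so far
    ft: 14238           distinct coverage features
    corp: 31/932b       corpus now holds 31 units totalling 932 bytes
    lim: 35             current cap on generated input length
    exec/s: 51          executions per second
    rss: 69Mb           resident memory
    L: 26/35            this input's length / largest in the corpus
    MS: 1 EraseBytes-   the mutation sequence that produced it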
[2024-11-17 23:07:18.673912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2a6e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.673924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.193 [2024-11-17 23:07:18.673975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.673988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:22.193 #53 NEW cov: 11828 ft: 14290 corp: 33/1002b lim: 35 exec/s: 53 rss: 70Mb L: 35/35 MS: 1 CopyPart- 00:07:22.193 [2024-11-17 23:07:18.713673] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.713700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.193 [2024-11-17 23:07:18.713753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.713766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.193 [2024-11-17 23:07:18.713817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.713831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.193 [2024-11-17 23:07:18.713882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e262e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.713895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.193 #54 NEW cov: 11828 ft: 14299 corp: 34/1034b lim: 35 exec/s: 54 rss: 70Mb L: 32/35 MS: 1 ShuffleBytes- 00:07:22.193 [2024-11-17 23:07:18.753359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0a0a1a0a cdw11:0e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.753385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.193 #59 NEW cov: 11828 ft: 15071 corp: 35/1044b lim: 35 exec/s: 59 rss: 70Mb L: 10/35 MS: 5 CopyPart-ChangeBit-CopyPart-EraseBytes-CrossOver- 00:07:22.193 [2024-11-17 23:07:18.793979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.794005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.193 [2024-11-17 23:07:18.794056] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:cc2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 
23:07:18.794070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.193 [2024-11-17 23:07:18.794122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:01002e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.794136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.193 [2024-11-17 23:07:18.794186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:6e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.193 [2024-11-17 23:07:18.794199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.453 #60 NEW cov: 11828 ft: 15084 corp: 36/1076b lim: 35 exec/s: 60 rss: 70Mb L: 32/35 MS: 1 CMP- DE: "\001\000"- 00:07:22.453 [2024-11-17 23:07:18.833811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.453 [2024-11-17 23:07:18.833836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.453 [2024-11-17 23:07:18.833889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.453 [2024-11-17 23:07:18.833902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.453 #61 NEW cov: 11828 ft: 15090 corp: 37/1091b lim: 35 exec/s: 61 rss: 70Mb L: 15/35 MS: 1 ShuffleBytes- 00:07:22.453 [2024-11-17 23:07:18.874179] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.453 [2024-11-17 23:07:18.874203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.453 [2024-11-17 23:07:18.874271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2ef60000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.453 [2024-11-17 23:07:18.874285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.453 [2024-11-17 23:07:18.874337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2c2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.453 [2024-11-17 23:07:18.874350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.453 [2024-11-17 23:07:18.874404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e26 cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.453 [2024-11-17 23:07:18.874417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.453 [2024-11-17 23:07:18.914319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.453 
[2024-11-17 23:07:18.914344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.453 [2024-11-17 23:07:18.914399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2ef60000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.453 [2024-11-17 23:07:18.914412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.453 [2024-11-17 23:07:18.914465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2c2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.453 [2024-11-17 23:07:18.914478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.453 [2024-11-17 23:07:18.914530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e26 cdw11:372e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.453 [2024-11-17 23:07:18.914547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.453 #63 NEW cov: 11828 ft: 15092 corp: 38/1124b lim: 35 exec/s: 63 rss: 70Mb L: 33/35 MS: 2 InsertByte-ChangeBinInt- 00:07:22.454 [2024-11-17 23:07:18.954544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.454 [2024-11-17 23:07:18.954570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.454 [2024-11-17 23:07:18.954623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:cc2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.454 [2024-11-17 23:07:18.954637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.454 [2024-11-17 23:07:18.954689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.454 [2024-11-17 23:07:18.954702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.454 [2024-11-17 23:07:18.954754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:6e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.454 [2024-11-17 23:07:18.954769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.454 [2024-11-17 23:07:18.954820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.454 [2024-11-17 23:07:18.954832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:22.454 #64 NEW cov: 11828 ft: 15097 corp: 39/1159b lim: 35 exec/s: 64 rss: 70Mb L: 35/35 MS: 1 CrossOver- 00:07:22.454 [2024-11-17 23:07:18.994670] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00002e00 cdw11:232e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
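Aside: nearly all of the traffic in this run is the fuzzer mutating CREATE IO CQ (opcode 05) admin commands, which the NVMe-oF target completes with INVALID OPCODE since fabrics controllers create queues through Connect rather than through the queue-creation admin commands. The logged cdw10/cdw11 dwords follow the NVMe-spec layout for this command, so any pair can be decoded; a small sketch (the helper name is ours, the sample values come from a mutation above):

    # NVMe Create I/O Completion Queue layout:
    #   CDW10: QSIZE[31:16] (0-based) | QID[15:0]
    #   CDW11: IV[31:16] | IEN (bit 1) | PC (bit 0)
    decode_create_cq() {
        local cdw10=$((16#$1)) cdw11=$((16#$2))
        echo "qid=$((cdw10 & 0xffff)) qsize=$(((cdw10 >> 16) + 1))"
        echo "pc=$((cdw11 & 1)) ien=$(((cdw11 >> 1) & 1)) iv=$((cdw11 >> 16))"
    }
    decode_create_cq 2e2e0a0e 2e2e0000   # -> qid=2574 qsize=11823, pc=0 ien=0 iv=11822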
00:07:22.454 [2024-11-17 23:07:18.994695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.454 [2024-11-17 23:07:18.994748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.454 [2024-11-17 23:07:18.994762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.454 [2024-11-17 23:07:18.994815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.454 [2024-11-17 23:07:18.994828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.454 [2024-11-17 23:07:18.994879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.454 [2024-11-17 23:07:18.994892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.454 [2024-11-17 23:07:18.994941] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.454 [2024-11-17 23:07:18.994954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:22.454 #65 NEW cov: 11828 ft: 15103 corp: 40/1194b lim: 35 exec/s: 65 rss: 70Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:22.454 [2024-11-17 23:07:19.034623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.454 [2024-11-17 23:07:19.034649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.454 [2024-11-17 23:07:19.034701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.454 [2024-11-17 23:07:19.034715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.454 [2024-11-17 23:07:19.034771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.454 [2024-11-17 23:07:19.034784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.454 [2024-11-17 23:07:19.034833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:262e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.454 [2024-11-17 23:07:19.034846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.454 #66 NEW cov: 11828 ft: 15133 corp: 41/1228b lim: 35 exec/s: 66 rss: 70Mb L: 34/35 MS: 1 CopyPart- 00:07:22.714 [2024-11-17 23:07:19.074758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:22.714 [2024-11-17 23:07:19.074786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.714 [2024-11-17 23:07:19.074855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2ead3e cdw11:2ecc0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.714 [2024-11-17 23:07:19.074868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.714 [2024-11-17 23:07:19.074922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.714 [2024-11-17 23:07:19.074935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.714 [2024-11-17 23:07:19.074989] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:7a2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.714 [2024-11-17 23:07:19.075002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.714 #67 NEW cov: 11828 ft: 15137 corp: 42/1262b lim: 35 exec/s: 67 rss: 70Mb L: 34/35 MS: 1 InsertByte- 00:07:22.714 [2024-11-17 23:07:19.115000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2e2e0a0a cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.714 [2024-11-17 23:07:19.115024] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.714 [2024-11-17 23:07:19.115078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.714 [2024-11-17 23:07:19.115093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.714 [2024-11-17 23:07:19.115145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.714 [2024-11-17 23:07:19.115158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.714 [2024-11-17 23:07:19.115211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e002e2e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.714 [2024-11-17 23:07:19.115224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.714 [2024-11-17 23:07:19.115277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:002e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.714 [2024-11-17 23:07:19.115290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:22.714 #68 NEW cov: 11828 ft: 15144 corp: 43/1297b lim: 35 exec/s: 68 rss: 70Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:22.714 [2024-11-17 23:07:19.155136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:01000a0a cdw11:2e2e0000 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:22.714 [2024-11-17 23:07:19.155161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:22.714 [2024-11-17 23:07:19.155215] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.714 [2024-11-17 23:07:19.155228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:22.714 [2024-11-17 23:07:19.155281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.714 [2024-11-17 23:07:19.155310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:22.714 [2024-11-17 23:07:19.155364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.714 [2024-11-17 23:07:19.155377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:22.714 [2024-11-17 23:07:19.155430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:2e2e2e2e cdw11:2e2e0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:22.714 [2024-11-17 23:07:19.155443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:22.714 #69 NEW cov: 11828 ft: 15151 corp: 44/1332b lim: 35 exec/s: 34 rss: 70Mb L: 35/35 MS: 1 PersAutoDict- DE: "\001\000"- 00:07:22.714 #69 DONE cov: 11828 ft: 15151 corp: 44/1332b lim: 35 exec/s: 34 rss: 70Mb 00:07:22.714 ###### Recommended dictionary. ###### 00:07:22.714 "\001\000" # Uses: 1 00:07:22.714 ###### End of recommended dictionary. 
###### 00:07:22.714 Done 69 runs in 2 second(s) 00:07:22.714 23:07:19 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_4.conf 00:07:22.714 23:07:19 -- ../common.sh@72 -- # (( i++ )) 00:07:22.714 23:07:19 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:22.714 23:07:19 -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:07:22.714 23:07:19 -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:07:22.714 23:07:19 -- nvmf/run.sh@24 -- # local timen=1 00:07:22.714 23:07:19 -- nvmf/run.sh@25 -- # local core=0x1 00:07:22.714 23:07:19 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:22.714 23:07:19 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:07:22.714 23:07:19 -- nvmf/run.sh@29 -- # printf %02d 5 00:07:22.714 23:07:19 -- nvmf/run.sh@29 -- # port=4405 00:07:22.714 23:07:19 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:22.714 23:07:19 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:07:22.714 23:07:19 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:22.714 23:07:19 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 -r /var/tmp/spdk5.sock 00:07:22.973 [2024-11-17 23:07:19.332970] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:22.973 [2024-11-17 23:07:19.333059] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298990 ] 00:07:22.973 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.973 [2024-11-17 23:07:19.510837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.973 [2024-11-17 23:07:19.573501] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:22.973 [2024-11-17 23:07:19.573631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.232 [2024-11-17 23:07:19.631617] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.232 [2024-11-17 23:07:19.647946] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:07:23.232 INFO: Running with entropic power schedule (0xFF, 100). 00:07:23.232 INFO: Seed: 3276260909 00:07:23.232 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:23.232 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:23.232 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:07:23.232 INFO: A corpus is not provided, starting from an empty corpus 00:07:23.232 #2 INITED exec/s: 0 rss: 60Mb 00:07:23.232 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:23.232 This may also happen if the target rejected all inputs we tried so far 00:07:23.232 [2024-11-17 23:07:19.697055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.233 [2024-11-17 23:07:19.697084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.492 NEW_FUNC[1/671]: 0x442af8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:07:23.492 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:23.492 #26 NEW cov: 11608 ft: 11609 corp: 2/10b lim: 45 exec/s: 0 rss: 68Mb L: 9/9 MS: 4 CopyPart-InsertByte-EraseBytes-InsertRepeatedBytes- 00:07:23.492 [2024-11-17 23:07:19.997737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.492 [2024-11-17 23:07:19.997770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.492 #29 NEW cov: 11725 ft: 12205 corp: 3/24b lim: 45 exec/s: 0 rss: 68Mb L: 14/14 MS: 3 InsertRepeatedBytes-ChangeByte-CopyPart- 00:07:23.492 [2024-11-17 23:07:20.037856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.492 [2024-11-17 23:07:20.037886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.492 #30 NEW cov: 11731 ft: 12405 corp: 4/34b lim: 45 exec/s: 0 rss: 68Mb L: 10/14 MS: 1 InsertByte- 00:07:23.492 [2024-11-17 23:07:20.077950] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffc40a cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.492 [2024-11-17 23:07:20.077980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.492 #32 NEW cov: 11816 ft: 12689 corp: 5/45b lim: 45 exec/s: 0 rss: 68Mb L: 11/14 MS: 2 ChangeByte-CrossOver- 00:07:23.752 [2024-11-17 23:07:20.118090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffc40a cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.752 [2024-11-17 23:07:20.118118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.752 #33 NEW cov: 11816 ft: 12805 corp: 6/56b lim: 45 exec/s: 0 rss: 68Mb L: 11/14 MS: 1 CopyPart- 00:07:23.752 [2024-11-17 23:07:20.158186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffd40aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.752 [2024-11-17 23:07:20.158211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.752 #34 NEW cov: 11816 ft: 12937 corp: 7/65b lim: 45 exec/s: 0 rss: 68Mb L: 9/14 MS: 1 ChangeByte- 00:07:23.752 [2024-11-17 23:07:20.198311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.752 [2024-11-17 23:07:20.198337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.752 #35 NEW cov: 11816 ft: 13021 corp: 8/78b lim: 45 exec/s: 0 rss: 68Mb L: 13/14 MS: 1 EraseBytes- 00:07:23.752 [2024-11-17 23:07:20.238430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.752 [2024-11-17 23:07:20.238455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.752 #36 NEW cov: 11816 ft: 13065 corp: 9/91b lim: 45 exec/s: 0 rss: 68Mb L: 13/14 MS: 1 ShuffleBytes- 00:07:23.752 [2024-11-17 23:07:20.279059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.752 [2024-11-17 23:07:20.279089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.752 [2024-11-17 23:07:20.279144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.752 [2024-11-17 23:07:20.279158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:23.752 [2024-11-17 23:07:20.279211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.752 [2024-11-17 23:07:20.279225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:23.752 [2024-11-17 23:07:20.279279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.752 [2024-11-17 23:07:20.279292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:23.752 #37 NEW cov: 11816 ft: 13966 corp: 10/127b lim: 45 exec/s: 0 rss: 68Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:07:23.752 [2024-11-17 23:07:20.318622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff760001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.752 [2024-11-17 23:07:20.318648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:23.752 #38 NEW cov: 11816 ft: 13993 corp: 11/140b lim: 45 exec/s: 0 rss: 68Mb L: 13/36 MS: 1 CMP- DE: "v<\016\260\330\177\000\000"- 00:07:23.752 [2024-11-17 23:07:20.358803] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:23.752 [2024-11-17 23:07:20.358830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.011 #39 NEW cov: 11816 ft: 14024 corp: 12/153b lim: 45 exec/s: 0 rss: 68Mb L: 13/36 MS: 1 InsertRepeatedBytes- 00:07:24.011 [2024-11-17 23:07:20.399044] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0affff11 cdw11:40400002 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.012 [2024-11-17 23:07:20.399070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.012 [2024-11-17 23:07:20.399140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:40404040 cdw11:40400007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.012 [2024-11-17 23:07:20.399154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.012 #41 NEW cov: 11816 ft: 14290 corp: 13/171b lim: 45 exec/s: 0 rss: 68Mb L: 18/36 MS: 2 EraseBytes-InsertRepeatedBytes- 00:07:24.012 [2024-11-17 23:07:20.438958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.012 [2024-11-17 23:07:20.438983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.012 #42 NEW cov: 11816 ft: 14348 corp: 14/184b lim: 45 exec/s: 0 rss: 69Mb L: 13/36 MS: 1 ShuffleBytes- 00:07:24.012 [2024-11-17 23:07:20.479124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff6d0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.012 [2024-11-17 23:07:20.479150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.012 #43 NEW cov: 11816 ft: 14373 corp: 15/197b lim: 45 exec/s: 0 rss: 69Mb L: 13/36 MS: 1 ChangeByte- 00:07:24.012 [2024-11-17 23:07:20.509198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.012 [2024-11-17 23:07:20.509226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.012 #44 NEW cov: 11816 ft: 14392 corp: 16/211b lim: 45 exec/s: 0 rss: 69Mb L: 14/36 MS: 1 CrossOver- 00:07:24.012 [2024-11-17 23:07:20.549317] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff0effff cdw11:ffff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.012 [2024-11-17 23:07:20.549342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.012 #45 NEW cov: 11816 ft: 14417 corp: 17/224b lim: 45 exec/s: 0 rss: 69Mb L: 13/36 MS: 1 ShuffleBytes- 00:07:24.012 [2024-11-17 23:07:20.589467] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.012 [2024-11-17 23:07:20.589493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.012 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:24.012 #46 NEW cov: 11839 ft: 14471 corp: 18/234b lim: 45 exec/s: 0 rss: 69Mb L: 10/36 MS: 1 CopyPart- 00:07:24.271 [2024-11-17 23:07:20.629550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:93ffc40a cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.271 [2024-11-17 23:07:20.629575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.271 #47 NEW cov: 11839 ft: 14584 corp: 19/245b lim: 45 exec/s: 0 rss: 69Mb L: 11/36 MS: 1 ChangeByte- 00:07:24.271 [2024-11-17 23:07:20.670166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff0effff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.271 [2024-11-17 23:07:20.670192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.271 [2024-11-17 23:07:20.670261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.271 [2024-11-17 23:07:20.670275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.271 [2024-11-17 23:07:20.670311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.271 [2024-11-17 23:07:20.670324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.271 [2024-11-17 23:07:20.670378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.271 [2024-11-17 23:07:20.670391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.271 #48 NEW cov: 11839 ft: 14645 corp: 20/286b lim: 45 exec/s: 48 rss: 69Mb L: 41/41 MS: 1 InsertRepeatedBytes- 00:07:24.271 [2024-11-17 23:07:20.719821] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:baff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.271 [2024-11-17 23:07:20.719846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.271 #49 NEW cov: 11839 ft: 14651 corp: 21/300b lim: 45 exec/s: 49 rss: 69Mb L: 14/41 MS: 1 InsertByte- 00:07:24.271 [2024-11-17 23:07:20.760234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0affff11 cdw11:40400002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.271 [2024-11-17 23:07:20.760259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.271 [2024-11-17 23:07:20.760319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:40404040 cdw11:40400002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.271 [2024-11-17 23:07:20.760333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.271 [2024-11-17 23:07:20.760388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:40404040 cdw11:40400002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.271 [2024-11-17 23:07:20.760402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.271 #50 NEW cov: 11839 ft: 14894 corp: 22/328b lim: 45 exec/s: 50 rss: 69Mb L: 28/41 MS: 1 CopyPart- 00:07:24.271 [2024-11-17 23:07:20.810075] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0effffff cdw11:ffff0001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.271 [2024-11-17 23:07:20.810101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.271 #51 NEW cov: 11839 ft: 14933 corp: 23/341b lim: 45 exec/s: 51 rss: 69Mb L: 13/41 MS: 1 ShuffleBytes- 00:07:24.271 [2024-11-17 23:07:20.850843] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.271 [2024-11-17 23:07:20.850868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.271 [2024-11-17 23:07:20.850924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00ff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.271 [2024-11-17 23:07:20.850937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.271 [2024-11-17 23:07:20.850992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.271 [2024-11-17 23:07:20.851006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.272 [2024-11-17 23:07:20.851060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.272 [2024-11-17 23:07:20.851073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.272 [2024-11-17 23:07:20.851124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.272 [2024-11-17 23:07:20.851137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.272 #52 NEW cov: 11839 ft: 14997 corp: 24/386b lim: 45 exec/s: 52 rss: 69Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:07:24.531 [2024-11-17 23:07:20.900518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:0affff11 cdw11:40c00006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.531 [2024-11-17 23:07:20.900547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.531 [2024-11-17 23:07:20.900617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:40404040 cdw11:40400007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.531 [2024-11-17 23:07:20.900631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.531 #53 NEW cov: 11839 ft: 15009 corp: 25/404b lim: 45 exec/s: 53 rss: 69Mb L: 18/45 MS: 1 ChangeBinInt- 00:07:24.531 [2024-11-17 23:07:20.940454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.531 [2024-11-17 23:07:20.940478] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.531 #54 NEW cov: 11839 ft: 15017 corp: 26/417b lim: 45 exec/s: 54 rss: 69Mb L: 13/45 MS: 1 EraseBytes- 00:07:24.531 [2024-11-17 23:07:20.981031] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff0effff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.531 [2024-11-17 23:07:20.981055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.531 [2024-11-17 23:07:20.981151] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.531 [2024-11-17 23:07:20.981166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.531 [2024-11-17 23:07:20.981220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.531 [2024-11-17 23:07:20.981233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.531 [2024-11-17 23:07:20.981287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00ff0000 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.531 [2024-11-17 23:07:20.981300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.531 #55 NEW cov: 11839 ft: 15065 corp: 27/461b lim: 45 exec/s: 55 rss: 69Mb L: 44/45 MS: 1 InsertRepeatedBytes- 00:07:24.531 [2024-11-17 23:07:21.031335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cfcfff11 cdw11:cfcf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.531 [2024-11-17 23:07:21.031360] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.531 [2024-11-17 23:07:21.031414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cfcfcfcf cdw11:cfcf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.531 [2024-11-17 23:07:21.031428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.531 [2024-11-17 23:07:21.031481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff40cf0a cdw11:40400002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.531 [2024-11-17 23:07:21.031495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.531 [2024-11-17 23:07:21.031553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:40404040 cdw11:40400002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.531 [2024-11-17 23:07:21.031566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.531 [2024-11-17 23:07:21.031619] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:40404040 cdw11:40400007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.531 [2024-11-17 23:07:21.031632] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:24.531 #56 NEW cov: 11839 ft: 15110 corp: 28/506b lim: 45 exec/s: 56 rss: 69Mb L: 45/45 MS: 1 InsertRepeatedBytes- 00:07:24.531 [2024-11-17 23:07:21.081318] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff0effff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.531 [2024-11-17 23:07:21.081344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.531 [2024-11-17 23:07:21.081416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.531 [2024-11-17 23:07:21.081434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.531 [2024-11-17 23:07:21.081488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.531 [2024-11-17 23:07:21.081501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.531 [2024-11-17 23:07:21.081555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.531 [2024-11-17 23:07:21.081569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.531 #57 NEW cov: 11839 ft: 15120 corp: 29/547b lim: 45 exec/s: 57 rss: 69Mb L: 41/45 MS: 1 ShuffleBytes- 00:07:24.531 [2024-11-17 23:07:21.121157] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff0effff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.531 [2024-11-17 23:07:21.121181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.531 [2024-11-17 23:07:21.121235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.531 [2024-11-17 23:07:21.121248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.791 #58 NEW cov: 11839 ft: 15124 corp: 30/571b lim: 45 exec/s: 58 rss: 70Mb L: 24/45 MS: 1 EraseBytes- 00:07:24.791 [2024-11-17 23:07:21.161617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff6d0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.791 [2024-11-17 23:07:21.161642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.791 [2024-11-17 23:07:21.161697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.791 [2024-11-17 23:07:21.161711] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.791 [2024-11-17 23:07:21.161766] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 
cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.791 [2024-11-17 23:07:21.161779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.791 [2024-11-17 23:07:21.161834] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.791 [2024-11-17 23:07:21.161847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.791 #59 NEW cov: 11839 ft: 15148 corp: 31/611b lim: 45 exec/s: 59 rss: 70Mb L: 40/45 MS: 1 CrossOver- 00:07:24.791 [2024-11-17 23:07:21.211731] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff0effff cdw11:ff710000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.791 [2024-11-17 23:07:21.211756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.791 [2024-11-17 23:07:21.211828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.791 [2024-11-17 23:07:21.211842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.791 [2024-11-17 23:07:21.211898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.791 [2024-11-17 23:07:21.211914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.791 [2024-11-17 23:07:21.211965] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00ff0000 cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.791 [2024-11-17 23:07:21.211979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.791 #60 NEW cov: 11839 ft: 15156 corp: 32/655b lim: 45 exec/s: 60 rss: 70Mb L: 44/45 MS: 1 ChangeByte- 00:07:24.791 [2024-11-17 23:07:21.251378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:763c0aff cdw11:0eb00006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.791 [2024-11-17 23:07:21.251402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.791 #61 NEW cov: 11839 ft: 15168 corp: 33/665b lim: 45 exec/s: 61 rss: 70Mb L: 10/45 MS: 1 PersAutoDict- DE: "v<\016\260\330\177\000\000"- 00:07:24.791 [2024-11-17 23:07:21.291980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.791 [2024-11-17 23:07:21.292004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.791 [2024-11-17 23:07:21.292058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.791 [2024-11-17 23:07:21.292071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 
m:0 dnr:0 00:07:24.791 [2024-11-17 23:07:21.292122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:40400aff cdw11:40400002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.791 [2024-11-17 23:07:21.292135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.791 [2024-11-17 23:07:21.292190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:40404040 cdw11:40400002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.791 [2024-11-17 23:07:21.292203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.791 #62 NEW cov: 11839 ft: 15183 corp: 34/709b lim: 45 exec/s: 62 rss: 70Mb L: 44/45 MS: 1 InsertRepeatedBytes- 00:07:24.791 [2024-11-17 23:07:21.331582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.791 [2024-11-17 23:07:21.331606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.791 #63 NEW cov: 11839 ft: 15189 corp: 35/722b lim: 45 exec/s: 63 rss: 70Mb L: 13/45 MS: 1 InsertRepeatedBytes- 00:07:24.791 [2024-11-17 23:07:21.372219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff0effff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.791 [2024-11-17 23:07:21.372244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:24.791 [2024-11-17 23:07:21.372298] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.791 [2024-11-17 23:07:21.372312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:24.791 [2024-11-17 23:07:21.372365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.791 [2024-11-17 23:07:21.372378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:24.791 [2024-11-17 23:07:21.372432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:24.791 [2024-11-17 23:07:21.372448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:24.791 #64 NEW cov: 11839 ft: 15207 corp: 36/763b lim: 45 exec/s: 64 rss: 70Mb L: 41/45 MS: 1 ChangeByte- 00:07:25.051 [2024-11-17 23:07:21.411831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffd40aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.411857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.051 #65 NEW cov: 11839 ft: 15231 corp: 37/772b lim: 45 exec/s: 65 rss: 70Mb L: 9/45 MS: 1 CMP- DE: "\036\000"- 00:07:25.051 [2024-11-17 23:07:21.452478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE 
IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0aff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.452503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.051 [2024-11-17 23:07:21.452576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.452590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.051 [2024-11-17 23:07:21.452646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.452659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.051 [2024-11-17 23:07:21.452714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ff56ffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.452726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.051 #66 NEW cov: 11839 ft: 15364 corp: 38/808b lim: 45 exec/s: 66 rss: 70Mb L: 36/45 MS: 1 ChangeByte- 00:07:25.051 [2024-11-17 23:07:21.492248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:01200006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.492272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.051 [2024-11-17 23:07:21.492329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.492343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.051 #67 NEW cov: 11839 ft: 15379 corp: 39/829b lim: 45 exec/s: 67 rss: 70Mb L: 21/45 MS: 1 CMP- DE: "\377\377\377\377\001 \307\336"- 00:07:25.051 [2024-11-17 23:07:21.532701] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ff6d0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.532726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.051 [2024-11-17 23:07:21.532783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.532796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.051 [2024-11-17 23:07:21.532851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:faff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.532864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.051 [2024-11-17 23:07:21.532920] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.532933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.051 #68 NEW cov: 11839 ft: 15412 corp: 40/869b lim: 45 exec/s: 68 rss: 70Mb L: 40/45 MS: 1 ChangeBinInt- 00:07:25.051 [2024-11-17 23:07:21.572987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:cfcfff11 cdw11:cfcf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.573011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.051 [2024-11-17 23:07:21.573069] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:cfcfcfcf cdw11:cfcf0006 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.573083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.051 [2024-11-17 23:07:21.573137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:ff40cf0a cdw11:40400002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.573151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.051 [2024-11-17 23:07:21.573205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:40404040 cdw11:40400002 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.573218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.051 [2024-11-17 23:07:21.573272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:40404040 cdw11:40400007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.573285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.051 #69 NEW cov: 11839 ft: 15420 corp: 41/914b lim: 45 exec/s: 69 rss: 70Mb L: 45/45 MS: 1 CopyPart- 00:07:25.051 [2024-11-17 23:07:21.622769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:01013a01 cdw11:01010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.622794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.051 [2024-11-17 23:07:21.622849] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:01010101 cdw11:01010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.622862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.051 [2024-11-17 23:07:21.622934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:01010101 cdw11:01010000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.622948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.051 #72 NEW cov: 11839 ft: 15503 corp: 42/944b 
lim: 45 exec/s: 72 rss: 70Mb L: 30/45 MS: 3 InsertByte-ChangeByte-InsertRepeatedBytes- 00:07:25.051 [2024-11-17 23:07:21.663088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff0effff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.663113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.051 [2024-11-17 23:07:21.663170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c7de0120 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.663185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.051 [2024-11-17 23:07:21.663241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000001 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.051 [2024-11-17 23:07:21.663255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.312 [2024-11-17 23:07:21.663310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.312 [2024-11-17 23:07:21.663324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.312 [2024-11-17 23:07:21.703327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ff0effff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.312 [2024-11-17 23:07:21.703352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:25.312 [2024-11-17 23:07:21.703407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:c7de0120 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.312 [2024-11-17 23:07:21.703420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:25.312 [2024-11-17 23:07:21.703474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.312 [2024-11-17 23:07:21.703487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:25.312 [2024-11-17 23:07:21.703544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:0000003b cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.312 [2024-11-17 23:07:21.703557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:25.312 [2024-11-17 23:07:21.703612] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:8 nsid:0 cdw10:3c760000 cdw11:b0d80003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:25.312 [2024-11-17 23:07:21.703626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:25.312 #74 NEW cov: 11839 ft: 15510 corp: 43/989b lim: 45 exec/s: 37 rss: 70Mb L: 45/45 MS: 2 PersAutoDict-CMP- DE: 
"\377\377\377\377\001 \307\336"-"\000\000\002\000"- 00:07:25.312 #74 DONE cov: 11839 ft: 15510 corp: 43/989b lim: 45 exec/s: 37 rss: 70Mb 00:07:25.312 ###### Recommended dictionary. ###### 00:07:25.312 "v<\016\260\330\177\000\000" # Uses: 1 00:07:25.312 "\036\000" # Uses: 0 00:07:25.312 "\377\377\377\377\001 \307\336" # Uses: 1 00:07:25.312 "\000\000\002\000" # Uses: 0 00:07:25.312 ###### End of recommended dictionary. ###### 00:07:25.312 Done 74 runs in 2 second(s) 00:07:25.312 23:07:21 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_5.conf 00:07:25.312 23:07:21 -- ../common.sh@72 -- # (( i++ )) 00:07:25.312 23:07:21 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:25.312 23:07:21 -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:07:25.312 23:07:21 -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:07:25.312 23:07:21 -- nvmf/run.sh@24 -- # local timen=1 00:07:25.312 23:07:21 -- nvmf/run.sh@25 -- # local core=0x1 00:07:25.312 23:07:21 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:25.312 23:07:21 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:07:25.312 23:07:21 -- nvmf/run.sh@29 -- # printf %02d 6 00:07:25.312 23:07:21 -- nvmf/run.sh@29 -- # port=4406 00:07:25.312 23:07:21 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:25.312 23:07:21 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:07:25.312 23:07:21 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:25.312 23:07:21 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 -r /var/tmp/spdk6.sock 00:07:25.312 [2024-11-17 23:07:21.884669] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:25.312 [2024-11-17 23:07:21.884745] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299296 ] 00:07:25.312 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.571 [2024-11-17 23:07:22.073498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.571 [2024-11-17 23:07:22.136749] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:25.571 [2024-11-17 23:07:22.136893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.831 [2024-11-17 23:07:22.194896] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.831 [2024-11-17 23:07:22.211222] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:07:25.831 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:25.831 INFO: Seed: 1546294692 00:07:25.831 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:25.831 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:25.831 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:07:25.831 INFO: A corpus is not provided, starting from an empty corpus 00:07:25.831 #2 INITED exec/s: 0 rss: 60Mb 00:07:25.831 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:25.831 This may also happen if the target rejected all inputs we tried so far 00:07:25.831 [2024-11-17 23:07:22.287226] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a3f cdw11:00000000 00:07:25.831 [2024-11-17 23:07:22.287262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.091 NEW_FUNC[1/669]: 0x445308 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:07:26.091 NEW_FUNC[2/669]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:26.091 #4 NEW cov: 11529 ft: 11528 corp: 2/3b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 2 CrossOver-InsertByte- 00:07:26.091 [2024-11-17 23:07:22.618142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 00:07:26.091 [2024-11-17 23:07:22.618189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.091 #5 NEW cov: 11642 ft: 12025 corp: 3/5b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ChangeBinInt- 00:07:26.091 [2024-11-17 23:07:22.668086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 00:07:26.091 [2024-11-17 23:07:22.668113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.091 #6 NEW cov: 11648 ft: 12380 corp: 4/7b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ShuffleBytes- 00:07:26.351 [2024-11-17 23:07:22.708227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000202 cdw11:00000000 00:07:26.351 [2024-11-17 23:07:22.708256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.351 #7 NEW cov: 11733 ft: 12624 corp: 5/9b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ChangeBit- 00:07:26.351 [2024-11-17 23:07:22.748332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ab3f cdw11:00000000 00:07:26.351 [2024-11-17 23:07:22.748359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.351 #8 NEW cov: 11733 ft: 12732 corp: 6/11b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ChangeByte- 00:07:26.351 [2024-11-17 23:07:22.788502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002202 cdw11:00000000 00:07:26.351 [2024-11-17 23:07:22.788529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.351 #9 NEW cov: 11733 ft: 
12790 corp: 7/13b lim: 10 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ChangeBit- 00:07:26.351 [2024-11-17 23:07:22.829373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ab00 cdw11:00000000 00:07:26.351 [2024-11-17 23:07:22.829399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.351 [2024-11-17 23:07:22.829505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.351 [2024-11-17 23:07:22.829521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.351 [2024-11-17 23:07:22.829650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.351 [2024-11-17 23:07:22.829667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.351 [2024-11-17 23:07:22.829781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.351 [2024-11-17 23:07:22.829797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.351 [2024-11-17 23:07:22.829900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000003f cdw11:00000000 00:07:26.351 [2024-11-17 23:07:22.829916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:26.351 #10 NEW cov: 11733 ft: 13175 corp: 8/23b lim: 10 exec/s: 0 rss: 68Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:07:26.351 [2024-11-17 23:07:22.878732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ab3f cdw11:00000000 00:07:26.351 [2024-11-17 23:07:22.878758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.351 #11 NEW cov: 11733 ft: 13302 corp: 9/25b lim: 10 exec/s: 0 rss: 68Mb L: 2/10 MS: 1 ShuffleBytes- 00:07:26.351 [2024-11-17 23:07:22.918904] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000101 cdw11:00000000 00:07:26.351 [2024-11-17 23:07:22.918930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.351 #16 NEW cov: 11733 ft: 13347 corp: 10/27b lim: 10 exec/s: 0 rss: 68Mb L: 2/10 MS: 5 EraseBytes-ShuffleBytes-ChangeBinInt-ChangeBit-CopyPart- 00:07:26.351 [2024-11-17 23:07:22.959067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000f43f cdw11:00000000 00:07:26.351 [2024-11-17 23:07:22.959093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.610 #17 NEW cov: 11733 ft: 13363 corp: 11/29b lim: 10 exec/s: 0 rss: 68Mb L: 2/10 MS: 1 ChangeBinInt- 00:07:26.610 [2024-11-17 23:07:22.999139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000012 cdw11:00000000 00:07:26.610 [2024-11-17 23:07:22.999164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:26.610 #18 NEW cov: 11733 ft: 13373 corp: 12/31b lim: 10 exec/s: 0 rss: 68Mb L: 2/10 MS: 1 ChangeBit- 00:07:26.610 [2024-11-17 23:07:23.039254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a25 cdw11:00000000 00:07:26.610 [2024-11-17 23:07:23.039285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.610 #19 NEW cov: 11733 ft: 13389 corp: 13/33b lim: 10 exec/s: 0 rss: 68Mb L: 2/10 MS: 1 InsertByte- 00:07:26.610 [2024-11-17 23:07:23.079861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00006666 cdw11:00000000 00:07:26.610 [2024-11-17 23:07:23.079887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.610 [2024-11-17 23:07:23.079992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00006666 cdw11:00000000 00:07:26.610 [2024-11-17 23:07:23.080008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.610 [2024-11-17 23:07:23.080121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ab3f cdw11:00000000 00:07:26.610 [2024-11-17 23:07:23.080139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.610 #20 NEW cov: 11733 ft: 13561 corp: 14/39b lim: 10 exec/s: 0 rss: 68Mb L: 6/10 MS: 1 InsertRepeatedBytes- 00:07:26.610 [2024-11-17 23:07:23.119552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002202 cdw11:00000000 00:07:26.610 [2024-11-17 23:07:23.119578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.610 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:26.610 #21 NEW cov: 11756 ft: 13589 corp: 15/41b lim: 10 exec/s: 0 rss: 69Mb L: 2/10 MS: 1 ShuffleBytes- 00:07:26.610 [2024-11-17 23:07:23.159815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ab3f cdw11:00000000 00:07:26.610 [2024-11-17 23:07:23.159840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.610 [2024-11-17 23:07:23.159960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ab3f cdw11:00000000 00:07:26.610 [2024-11-17 23:07:23.159975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.610 #22 NEW cov: 11756 ft: 13805 corp: 16/45b lim: 10 exec/s: 0 rss: 69Mb L: 4/10 MS: 1 CopyPart- 00:07:26.610 [2024-11-17 23:07:23.199810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000825 cdw11:00000000 00:07:26.610 [2024-11-17 23:07:23.199836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.870 #23 NEW cov: 11756 ft: 13820 corp: 17/47b lim: 10 exec/s: 0 rss: 69Mb L: 2/10 MS: 1 ChangeBit- 00:07:26.870 [2024-11-17 23:07:23.239892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000100 cdw11:00000000 00:07:26.870 [2024-11-17 23:07:23.239918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.870 #24 NEW cov: 11756 ft: 13846 corp: 18/49b lim: 10 exec/s: 24 rss: 69Mb L: 2/10 MS: 1 ChangeBit- 00:07:26.870 [2024-11-17 23:07:23.280856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ab00 cdw11:00000000 00:07:26.870 [2024-11-17 23:07:23.280882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.870 [2024-11-17 23:07:23.280999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.870 [2024-11-17 23:07:23.281015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.870 [2024-11-17 23:07:23.281128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.870 [2024-11-17 23:07:23.281147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.870 [2024-11-17 23:07:23.281252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004000 cdw11:00000000 00:07:26.870 [2024-11-17 23:07:23.281269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.870 [2024-11-17 23:07:23.281377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000003f cdw11:00000000 00:07:26.870 [2024-11-17 23:07:23.281393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:26.870 #25 NEW cov: 11756 ft: 13856 corp: 19/59b lim: 10 exec/s: 25 rss: 69Mb L: 10/10 MS: 1 ChangeByte- 00:07:26.870 [2024-11-17 23:07:23.320324] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:26.870 [2024-11-17 23:07:23.320351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.870 [2024-11-17 23:07:23.320470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff02 cdw11:00000000 00:07:26.870 [2024-11-17 23:07:23.320486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.870 #30 NEW cov: 11756 ft: 13887 corp: 20/63b lim: 10 exec/s: 30 rss: 69Mb L: 4/10 MS: 5 EraseBytes-ChangeByte-CrossOver-CrossOver-InsertRepeatedBytes- 00:07:26.870 [2024-11-17 23:07:23.361118] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ab00 cdw11:00000000 00:07:26.870 [2024-11-17 23:07:23.361143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.870 [2024-11-17 23:07:23.361260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.870 [2024-11-17 23:07:23.361279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.870 [2024-11-17 23:07:23.361388] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.870 [2024-11-17 23:07:23.361404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:26.870 [2024-11-17 23:07:23.361519] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.870 [2024-11-17 23:07:23.361537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:26.870 [2024-11-17 23:07:23.361650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:000000c1 cdw11:00000000 00:07:26.870 [2024-11-17 23:07:23.361665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:26.870 #31 NEW cov: 11756 ft: 13915 corp: 21/73b lim: 10 exec/s: 31 rss: 69Mb L: 10/10 MS: 1 ChangeBinInt- 00:07:26.870 [2024-11-17 23:07:23.400625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003fab cdw11:00000000 00:07:26.870 [2024-11-17 23:07:23.400652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.870 [2024-11-17 23:07:23.400763] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ab3f cdw11:00000000 00:07:26.870 [2024-11-17 23:07:23.400780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:26.870 #32 NEW cov: 11756 ft: 13945 corp: 22/77b lim: 10 exec/s: 32 rss: 69Mb L: 4/10 MS: 1 ShuffleBytes- 00:07:26.870 [2024-11-17 23:07:23.450572] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff02 cdw11:00000000 00:07:26.870 [2024-11-17 23:07:23.450603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:26.870 #33 NEW cov: 11756 ft: 13968 corp: 23/79b lim: 10 exec/s: 33 rss: 69Mb L: 2/10 MS: 1 EraseBytes- 00:07:27.130 [2024-11-17 23:07:23.490661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002100 cdw11:00000000 00:07:27.130 [2024-11-17 23:07:23.490689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.130 #34 NEW cov: 11756 ft: 13990 corp: 24/82b lim: 10 exec/s: 34 rss: 69Mb L: 3/10 MS: 1 InsertByte- 00:07:27.130 [2024-11-17 23:07:23.531593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ab00 cdw11:00000000 00:07:27.130 [2024-11-17 23:07:23.531620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.130 [2024-11-17 23:07:23.531748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.130 [2024-11-17 23:07:23.531764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.130 [2024-11-17 23:07:23.531876] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.130 [2024-11-17 23:07:23.531893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.130 [2024-11-17 23:07:23.532003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.130 [2024-11-17 23:07:23.532018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.130 [2024-11-17 23:07:23.532143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:000000c1 cdw11:00000000 00:07:27.130 [2024-11-17 23:07:23.532161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:27.130 #35 NEW cov: 11756 ft: 14041 corp: 25/92b lim: 10 exec/s: 35 rss: 69Mb L: 10/10 MS: 1 ShuffleBytes- 00:07:27.130 [2024-11-17 23:07:23.581536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a4a cdw11:00000000 00:07:27.130 [2024-11-17 23:07:23.581565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.130 [2024-11-17 23:07:23.581678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00004a4a cdw11:00000000 00:07:27.130 [2024-11-17 23:07:23.581695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.130 [2024-11-17 23:07:23.581801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00004a4a cdw11:00000000 00:07:27.130 [2024-11-17 23:07:23.581818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.130 [2024-11-17 23:07:23.581927] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00004a4a cdw11:00000000 00:07:27.130 [2024-11-17 23:07:23.581943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.130 #36 NEW cov: 11756 ft: 14065 corp: 26/101b lim: 10 exec/s: 36 rss: 69Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:27.130 [2024-11-17 23:07:23.621788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:27.130 [2024-11-17 23:07:23.621817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.130 [2024-11-17 23:07:23.621935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ff93 cdw11:00000000 00:07:27.130 [2024-11-17 23:07:23.621954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.130 [2024-11-17 23:07:23.622064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00009393 cdw11:00000000 00:07:27.130 [2024-11-17 23:07:23.622080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.130 [2024-11-17 
23:07:23.622190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00009393 cdw11:00000000 00:07:27.130 [2024-11-17 23:07:23.622207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.130 #37 NEW cov: 11756 ft: 14106 corp: 27/110b lim: 10 exec/s: 37 rss: 69Mb L: 9/10 MS: 1 InsertRepeatedBytes- 00:07:27.130 [2024-11-17 23:07:23.661421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003fab cdw11:00000000 00:07:27.130 [2024-11-17 23:07:23.661448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.130 [2024-11-17 23:07:23.661574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ab3f cdw11:00000000 00:07:27.130 [2024-11-17 23:07:23.661593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.130 #38 NEW cov: 11756 ft: 14132 corp: 28/114b lim: 10 exec/s: 38 rss: 69Mb L: 4/10 MS: 1 ShuffleBytes- 00:07:27.130 [2024-11-17 23:07:23.701334] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a25 cdw11:00000000 00:07:27.130 [2024-11-17 23:07:23.701362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.130 #39 NEW cov: 11756 ft: 14145 corp: 29/117b lim: 10 exec/s: 39 rss: 69Mb L: 3/10 MS: 1 CrossOver- 00:07:27.130 [2024-11-17 23:07:23.741547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000202 cdw11:00000000 00:07:27.130 [2024-11-17 23:07:23.741574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.390 #40 NEW cov: 11756 ft: 14233 corp: 30/119b lim: 10 exec/s: 40 rss: 69Mb L: 2/10 MS: 1 CopyPart- 00:07:27.390 [2024-11-17 23:07:23.781842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:000002ab cdw11:00000000 00:07:27.390 [2024-11-17 23:07:23.781870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.390 [2024-11-17 23:07:23.781987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.390 [2024-11-17 23:07:23.782003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.390 #41 NEW cov: 11756 ft: 14241 corp: 31/124b lim: 10 exec/s: 41 rss: 70Mb L: 5/10 MS: 1 CrossOver- 00:07:27.390 [2024-11-17 23:07:23.831980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000b9b9 cdw11:00000000 00:07:27.390 [2024-11-17 23:07:23.832007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.390 [2024-11-17 23:07:23.832116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000b93f cdw11:00000000 00:07:27.390 [2024-11-17 23:07:23.832132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.390 #44 NEW 
cov: 11756 ft: 14245 corp: 32/128b lim: 10 exec/s: 44 rss: 70Mb L: 4/10 MS: 3 EraseBytes-ShuffleBytes-InsertRepeatedBytes- 00:07:27.390 [2024-11-17 23:07:23.882742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ab00 cdw11:00000000 00:07:27.390 [2024-11-17 23:07:23.882774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.390 [2024-11-17 23:07:23.882895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.390 [2024-11-17 23:07:23.882910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.390 [2024-11-17 23:07:23.883022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.390 [2024-11-17 23:07:23.883041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.390 [2024-11-17 23:07:23.883156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00002800 cdw11:00000000 00:07:27.390 [2024-11-17 23:07:23.883173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.390 [2024-11-17 23:07:23.883283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000003f cdw11:00000000 00:07:27.390 [2024-11-17 23:07:23.883300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:27.390 #45 NEW cov: 11756 ft: 14262 corp: 33/138b lim: 10 exec/s: 45 rss: 70Mb L: 10/10 MS: 1 ChangeByte- 00:07:27.390 [2024-11-17 23:07:23.922209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ab3f cdw11:00000000 00:07:27.390 [2024-11-17 23:07:23.922238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.390 [2024-11-17 23:07:23.922357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ab6b cdw11:00000000 00:07:27.390 [2024-11-17 23:07:23.922375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.390 #46 NEW cov: 11756 ft: 14272 corp: 34/142b lim: 10 exec/s: 46 rss: 70Mb L: 4/10 MS: 1 ChangeByte- 00:07:27.390 [2024-11-17 23:07:23.962480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00003fab cdw11:00000000 00:07:27.390 [2024-11-17 23:07:23.962507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.390 [2024-11-17 23:07:23.962638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000aba9 cdw11:00000000 00:07:27.390 [2024-11-17 23:07:23.962656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.390 [2024-11-17 23:07:23.962757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000a9a9 cdw11:00000000 00:07:27.390 [2024-11-17 23:07:23.962774] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.390 #47 NEW cov: 11756 ft: 14283 corp: 35/149b lim: 10 exec/s: 47 rss: 70Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:07:27.649 [2024-11-17 23:07:24.002936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ab00 cdw11:00000000 00:07:27.650 [2024-11-17 23:07:24.002962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.650 [2024-11-17 23:07:24.003073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.650 [2024-11-17 23:07:24.003090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.650 [2024-11-17 23:07:24.003203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.650 [2024-11-17 23:07:24.003219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.650 [2024-11-17 23:07:24.003333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:27.650 [2024-11-17 23:07:24.003350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:27.650 [2024-11-17 23:07:24.003460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:000002c1 cdw11:00000000 00:07:27.650 [2024-11-17 23:07:24.003477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:27.650 #48 NEW cov: 11756 ft: 14308 corp: 36/159b lim: 10 exec/s: 48 rss: 70Mb L: 10/10 MS: 1 ChangeBit- 00:07:27.650 [2024-11-17 23:07:24.042498] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000ff31 cdw11:00000000 00:07:27.650 [2024-11-17 23:07:24.042523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.650 [2024-11-17 23:07:24.042639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:27.650 [2024-11-17 23:07:24.042654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.650 #49 NEW cov: 11756 ft: 14310 corp: 37/164b lim: 10 exec/s: 49 rss: 70Mb L: 5/10 MS: 1 InsertByte- 00:07:27.650 [2024-11-17 23:07:24.082495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002bab cdw11:00000000 00:07:27.650 [2024-11-17 23:07:24.082521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.650 #50 NEW cov: 11756 ft: 14313 corp: 38/167b lim: 10 exec/s: 50 rss: 70Mb L: 3/10 MS: 1 InsertByte- 00:07:27.650 [2024-11-17 23:07:24.112721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002202 cdw11:00000000 00:07:27.650 [2024-11-17 23:07:24.112746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:27.650 [2024-11-17 23:07:24.112865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00002202 cdw11:00000000 00:07:27.650 [2024-11-17 23:07:24.112882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.650 #51 NEW cov: 11756 ft: 14322 corp: 39/171b lim: 10 exec/s: 51 rss: 70Mb L: 4/10 MS: 1 CopyPart- 00:07:27.650 [2024-11-17 23:07:24.152672] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00007e02 cdw11:00000000 00:07:27.650 [2024-11-17 23:07:24.152699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.650 #57 NEW cov: 11756 ft: 14356 corp: 40/174b lim: 10 exec/s: 57 rss: 70Mb L: 3/10 MS: 1 InsertByte- 00:07:27.650 [2024-11-17 23:07:24.193193] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:07:27.650 [2024-11-17 23:07:24.193219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.650 [2024-11-17 23:07:24.193332] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:27.650 [2024-11-17 23:07:24.193349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:27.650 [2024-11-17 23:07:24.193451] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ff25 cdw11:00000000 00:07:27.650 [2024-11-17 23:07:24.193466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:27.650 #58 NEW cov: 11756 ft: 14359 corp: 41/181b lim: 10 exec/s: 58 rss: 70Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:07:27.650 [2024-11-17 23:07:24.232960] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000feff cdw11:00000000 00:07:27.650 [2024-11-17 23:07:24.232989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:27.650 #59 NEW cov: 11756 ft: 14379 corp: 42/183b lim: 10 exec/s: 29 rss: 70Mb L: 2/10 MS: 1 ChangeBinInt- 00:07:27.650 #59 DONE cov: 11756 ft: 14379 corp: 42/183b lim: 10 exec/s: 29 rss: 70Mb 00:07:27.650 Done 59 runs in 2 second(s) 00:07:27.909 23:07:24 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_6.conf 00:07:27.909 23:07:24 -- ../common.sh@72 -- # (( i++ )) 00:07:27.909 23:07:24 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:27.909 23:07:24 -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:07:27.909 23:07:24 -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:07:27.909 23:07:24 -- nvmf/run.sh@24 -- # local timen=1 00:07:27.909 23:07:24 -- nvmf/run.sh@25 -- # local core=0x1 00:07:27.909 23:07:24 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:27.909 23:07:24 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_7.conf 00:07:27.909 23:07:24 -- nvmf/run.sh@29 -- # printf %02d 7 00:07:27.909 23:07:24 -- nvmf/run.sh@29 -- # port=4407 00:07:27.909 23:07:24 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:27.909 23:07:24 -- nvmf/run.sh@32 -- 
# trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:07:27.909 23:07:24 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:27.909 23:07:24 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 -r /var/tmp/spdk7.sock 00:07:27.909 [2024-11-17 23:07:24.418045] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:27.909 [2024-11-17 23:07:24.418136] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299823 ] 00:07:27.909 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.169 [2024-11-17 23:07:24.605392] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.169 [2024-11-17 23:07:24.669538] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:28.169 [2024-11-17 23:07:24.669680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.169 [2024-11-17 23:07:24.727521] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.169 [2024-11-17 23:07:24.743817] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:07:28.169 INFO: Running with entropic power schedule (0xFF, 100). 00:07:28.169 INFO: Seed: 4079289720 00:07:28.169 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:28.169 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:28.169 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:07:28.169 INFO: A corpus is not provided, starting from an empty corpus 00:07:28.169 #2 INITED exec/s: 0 rss: 60Mb 00:07:28.169 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:28.169 This may also happen if the target rejected all inputs we tried so far 00:07:28.428 [2024-11-17 23:07:24.788516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.428 [2024-11-17 23:07:24.788553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.428 [2024-11-17 23:07:24.788600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.428 [2024-11-17 23:07:24.788616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.428 [2024-11-17 23:07:24.788648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.428 [2024-11-17 23:07:24.788663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.688 NEW_FUNC[1/669]: 0x445d08 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:07:28.688 NEW_FUNC[2/669]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:28.688 #3 NEW cov: 11529 ft: 11528 corp: 2/8b lim: 10 exec/s: 0 rss: 68Mb L: 7/7 MS: 1 InsertRepeatedBytes- 00:07:28.688 [2024-11-17 23:07:25.109355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.688 [2024-11-17 23:07:25.109393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.688 [2024-11-17 23:07:25.109425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.688 [2024-11-17 23:07:25.109440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.688 [2024-11-17 23:07:25.109466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000aff cdw11:00000000 00:07:28.688 [2024-11-17 23:07:25.109482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.688 [2024-11-17 23:07:25.109509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:28.688 [2024-11-17 23:07:25.109524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.688 #4 NEW cov: 11642 ft: 12238 corp: 3/16b lim: 10 exec/s: 0 rss: 68Mb L: 8/8 MS: 1 CrossOver- 00:07:28.688 [2024-11-17 23:07:25.179410] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000008b cdw11:00000000 00:07:28.688 [2024-11-17 23:07:25.179439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.688 [2024-11-17 23:07:25.179484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000f51 cdw11:00000000 00:07:28.688 [2024-11-17 23:07:25.179499] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.688 [2024-11-17 23:07:25.179526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c265 cdw11:00000000 00:07:28.688 [2024-11-17 23:07:25.179549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.688 [2024-11-17 23:07:25.179576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000af2a cdw11:00000000 00:07:28.688 [2024-11-17 23:07:25.179590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.688 #5 NEW cov: 11648 ft: 12506 corp: 4/25b lim: 10 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 CMP- DE: "\000\213\017Q\302e\257*"- 00:07:28.688 [2024-11-17 23:07:25.229494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.688 [2024-11-17 23:07:25.229523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.688 [2024-11-17 23:07:25.229575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.688 [2024-11-17 23:07:25.229591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.688 [2024-11-17 23:07:25.229622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000a0a cdw11:00000000 00:07:28.688 [2024-11-17 23:07:25.229637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.688 [2024-11-17 23:07:25.229664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:28.688 [2024-11-17 23:07:25.229679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.688 #6 NEW cov: 11733 ft: 12727 corp: 5/33b lim: 10 exec/s: 0 rss: 68Mb L: 8/9 MS: 1 CrossOver- 00:07:28.688 [2024-11-17 23:07:25.299592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00002d0a cdw11:00000000 00:07:28.688 [2024-11-17 23:07:25.299621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.947 #7 NEW cov: 11733 ft: 13167 corp: 6/35b lim: 10 exec/s: 0 rss: 68Mb L: 2/9 MS: 1 InsertByte- 00:07:28.947 [2024-11-17 23:07:25.349774] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.947 [2024-11-17 23:07:25.349803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.947 [2024-11-17 23:07:25.349848] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.947 [2024-11-17 23:07:25.349863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.947 [2024-11-17 23:07:25.349889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE 
IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.947 [2024-11-17 23:07:25.349905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.947 #8 NEW cov: 11733 ft: 13271 corp: 7/42b lim: 10 exec/s: 0 rss: 68Mb L: 7/9 MS: 1 ChangeBinInt- 00:07:28.947 [2024-11-17 23:07:25.399865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.947 [2024-11-17 23:07:25.399894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.947 [2024-11-17 23:07:25.399939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:28.947 [2024-11-17 23:07:25.399954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.947 #9 NEW cov: 11733 ft: 13463 corp: 8/47b lim: 10 exec/s: 0 rss: 68Mb L: 5/9 MS: 1 EraseBytes- 00:07:28.947 [2024-11-17 23:07:25.450137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ab2 cdw11:00000000 00:07:28.947 [2024-11-17 23:07:25.450167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.947 [2024-11-17 23:07:25.450197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:28.947 [2024-11-17 23:07:25.450212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.947 [2024-11-17 23:07:25.450239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:28.947 [2024-11-17 23:07:25.450254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:28.947 [2024-11-17 23:07:25.450280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:28.947 [2024-11-17 23:07:25.450294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.947 #10 NEW cov: 11733 ft: 13580 corp: 9/56b lim: 10 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:07:28.947 [2024-11-17 23:07:25.500253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000008b cdw11:00000000 00:07:28.947 [2024-11-17 23:07:25.500283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:28.947 [2024-11-17 23:07:25.500313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000f51 cdw11:00000000 00:07:28.948 [2024-11-17 23:07:25.500328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:28.948 [2024-11-17 23:07:25.500354] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c265 cdw11:00000000 00:07:28.948 [2024-11-17 23:07:25.500369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
00:07:28.948 [2024-11-17 23:07:25.500395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000af2a cdw11:00000000 00:07:28.948 [2024-11-17 23:07:25.500410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:28.948 #11 NEW cov: 11733 ft: 13616 corp: 10/65b lim: 10 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 ShuffleBytes- 00:07:29.206 [2024-11-17 23:07:25.560374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.206 [2024-11-17 23:07:25.560404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.206 [2024-11-17 23:07:25.560435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.206 [2024-11-17 23:07:25.560452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.206 [2024-11-17 23:07:25.560479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000fdff cdw11:00000000 00:07:29.206 [2024-11-17 23:07:25.560494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.206 #12 NEW cov: 11733 ft: 13675 corp: 11/72b lim: 10 exec/s: 0 rss: 68Mb L: 7/9 MS: 1 ChangeBit- 00:07:29.206 [2024-11-17 23:07:25.610504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000008b cdw11:00000000 00:07:29.206 [2024-11-17 23:07:25.610538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.206 [2024-11-17 23:07:25.610584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000f51 cdw11:00000000 00:07:29.206 [2024-11-17 23:07:25.610599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.206 [2024-11-17 23:07:25.610626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c265 cdw11:00000000 00:07:29.206 [2024-11-17 23:07:25.610641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.206 [2024-11-17 23:07:25.610667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000af2a cdw11:00000000 00:07:29.206 [2024-11-17 23:07:25.610681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.206 #14 NEW cov: 11733 ft: 13759 corp: 12/81b lim: 10 exec/s: 0 rss: 68Mb L: 9/9 MS: 2 EraseBytes-PersAutoDict- DE: "\000\213\017Q\302e\257*"- 00:07:29.206 [2024-11-17 23:07:25.670596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.206 [2024-11-17 23:07:25.670626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.206 [2024-11-17 23:07:25.670688] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff12 cdw11:00000000 00:07:29.207 
[2024-11-17 23:07:25.670704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.207 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:29.207 #15 NEW cov: 11756 ft: 13839 corp: 13/85b lim: 10 exec/s: 0 rss: 69Mb L: 4/9 MS: 1 EraseBytes- 00:07:29.207 [2024-11-17 23:07:25.740902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.207 [2024-11-17 23:07:25.740933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.207 [2024-11-17 23:07:25.740963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.207 [2024-11-17 23:07:25.740979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.207 [2024-11-17 23:07:25.741005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000fdff cdw11:00000000 00:07:29.207 [2024-11-17 23:07:25.741021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.207 [2024-11-17 23:07:25.741048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.207 [2024-11-17 23:07:25.741063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.207 [2024-11-17 23:07:25.741089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000120a cdw11:00000000 00:07:29.207 [2024-11-17 23:07:25.741103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:29.207 #16 NEW cov: 11756 ft: 13925 corp: 14/95b lim: 10 exec/s: 16 rss: 69Mb L: 10/10 MS: 1 CrossOver- 00:07:29.207 [2024-11-17 23:07:25.810975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff51 cdw11:00000000 00:07:29.207 [2024-11-17 23:07:25.811003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.207 [2024-11-17 23:07:25.811047] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c265 cdw11:00000000 00:07:29.207 [2024-11-17 23:07:25.811062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.207 [2024-11-17 23:07:25.811089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000af2a cdw11:00000000 00:07:29.207 [2024-11-17 23:07:25.811104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.466 #18 NEW cov: 11756 ft: 13970 corp: 15/101b lim: 10 exec/s: 18 rss: 69Mb L: 6/10 MS: 2 ChangeByte-CrossOver- 00:07:29.466 [2024-11-17 23:07:25.861134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.466 [2024-11-17 23:07:25.861162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.466 [2024-11-17 23:07:25.861206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.466 [2024-11-17 23:07:25.861221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.466 [2024-11-17 23:07:25.861248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000bdff cdw11:00000000 00:07:29.466 [2024-11-17 23:07:25.861263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.466 #19 NEW cov: 11756 ft: 13991 corp: 16/108b lim: 10 exec/s: 19 rss: 69Mb L: 7/10 MS: 1 ChangeBit- 00:07:29.466 [2024-11-17 23:07:25.911144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.466 [2024-11-17 23:07:25.911172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.466 #20 NEW cov: 11756 ft: 14003 corp: 17/110b lim: 10 exec/s: 20 rss: 69Mb L: 2/10 MS: 1 CrossOver- 00:07:29.466 [2024-11-17 23:07:25.971426] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.466 [2024-11-17 23:07:25.971454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.466 [2024-11-17 23:07:25.971499] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.466 [2024-11-17 23:07:25.971514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.466 [2024-11-17 23:07:25.971548] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ff7f cdw11:00000000 00:07:29.466 [2024-11-17 23:07:25.971563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.466 #21 NEW cov: 11756 ft: 14025 corp: 18/117b lim: 10 exec/s: 21 rss: 69Mb L: 7/10 MS: 1 ChangeBit- 00:07:29.466 [2024-11-17 23:07:26.021638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ff51 cdw11:00000000 00:07:29.466 [2024-11-17 23:07:26.021667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.466 [2024-11-17 23:07:26.021710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000c265 cdw11:00000000 00:07:29.466 [2024-11-17 23:07:26.021726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.466 [2024-11-17 23:07:26.021752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:29.466 [2024-11-17 23:07:26.021767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.466 [2024-11-17 23:07:26.021793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000b2af 
cdw11:00000000 00:07:29.466 [2024-11-17 23:07:26.021807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.466 #22 NEW cov: 11756 ft: 14107 corp: 19/126b lim: 10 exec/s: 22 rss: 69Mb L: 9/10 MS: 1 CrossOver- 00:07:29.725 [2024-11-17 23:07:26.081708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.725 [2024-11-17 23:07:26.081738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.725 [2024-11-17 23:07:26.081769] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ff12 cdw11:00000000 00:07:29.725 [2024-11-17 23:07:26.081785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.725 #23 NEW cov: 11756 ft: 14167 corp: 20/130b lim: 10 exec/s: 23 rss: 69Mb L: 4/10 MS: 1 CopyPart- 00:07:29.725 [2024-11-17 23:07:26.151914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000008b cdw11:00000000 00:07:29.725 [2024-11-17 23:07:26.151942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.725 [2024-11-17 23:07:26.151987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000f51 cdw11:00000000 00:07:29.725 [2024-11-17 23:07:26.152005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.725 [2024-11-17 23:07:26.152032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c265 cdw11:00000000 00:07:29.725 [2024-11-17 23:07:26.152047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.725 [2024-11-17 23:07:26.152073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000af2a cdw11:00000000 00:07:29.726 [2024-11-17 23:07:26.152088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.726 #24 NEW cov: 11756 ft: 14179 corp: 21/139b lim: 10 exec/s: 24 rss: 69Mb L: 9/10 MS: 1 ShuffleBytes- 00:07:29.726 [2024-11-17 23:07:26.212009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.726 [2024-11-17 23:07:26.212038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.726 [2024-11-17 23:07:26.212068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000e7ff cdw11:00000000 00:07:29.726 [2024-11-17 23:07:26.212083] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.726 #25 NEW cov: 11756 ft: 14202 corp: 22/144b lim: 10 exec/s: 25 rss: 69Mb L: 5/10 MS: 1 InsertByte- 00:07:29.726 [2024-11-17 23:07:26.282269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:07:29.726 [2024-11-17 23:07:26.282297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.726 [2024-11-17 23:07:26.282341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:07:29.726 [2024-11-17 23:07:26.282356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.726 [2024-11-17 23:07:26.282383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:07:29.726 [2024-11-17 23:07:26.282398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.726 [2024-11-17 23:07:26.282424] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 00:07:29.726 [2024-11-17 23:07:26.282438] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.726 #26 NEW cov: 11756 ft: 14209 corp: 23/153b lim: 10 exec/s: 26 rss: 69Mb L: 9/10 MS: 1 ChangeBinInt- 00:07:29.726 [2024-11-17 23:07:26.332263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.726 [2024-11-17 23:07:26.332291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.985 #27 NEW cov: 11756 ft: 14280 corp: 24/156b lim: 10 exec/s: 27 rss: 69Mb L: 3/10 MS: 1 CrossOver- 00:07:29.985 [2024-11-17 23:07:26.382610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.985 [2024-11-17 23:07:26.382641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.985 [2024-11-17 23:07:26.382671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.985 [2024-11-17 23:07:26.382686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.985 [2024-11-17 23:07:26.382712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:29.985 [2024-11-17 23:07:26.382731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.985 [2024-11-17 23:07:26.382757] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00000aff cdw11:00000000 00:07:29.985 [2024-11-17 23:07:26.382772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.985 [2024-11-17 23:07:26.382798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:29.985 [2024-11-17 23:07:26.382812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:29.985 #28 NEW cov: 11756 ft: 14292 corp: 25/166b lim: 10 exec/s: 28 rss: 69Mb L: 10/10 MS: 1 CrossOver- 00:07:29.985 [2024-11-17 23:07:26.432669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000008b 
cdw11:00000000 00:07:29.985 [2024-11-17 23:07:26.432698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.985 [2024-11-17 23:07:26.432742] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000f51 cdw11:00000000 00:07:29.985 [2024-11-17 23:07:26.432758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.985 [2024-11-17 23:07:26.432784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c265 cdw11:00000000 00:07:29.985 [2024-11-17 23:07:26.432799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.985 [2024-11-17 23:07:26.432825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000af2a cdw11:00000000 00:07:29.985 [2024-11-17 23:07:26.432840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.985 #30 NEW cov: 11756 ft: 14300 corp: 26/175b lim: 10 exec/s: 30 rss: 69Mb L: 9/10 MS: 2 ChangeByte-PersAutoDict- DE: "\000\213\017Q\302e\257*"- 00:07:29.985 [2024-11-17 23:07:26.482839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000ab2 cdw11:00000000 00:07:29.985 [2024-11-17 23:07:26.482868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.985 [2024-11-17 23:07:26.482897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b1b2 cdw11:00000000 00:07:29.985 [2024-11-17 23:07:26.482911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.985 [2024-11-17 23:07:26.482936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:29.985 [2024-11-17 23:07:26.482950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.985 [2024-11-17 23:07:26.482976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:07:29.985 [2024-11-17 23:07:26.482990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.985 #31 NEW cov: 11756 ft: 14363 corp: 27/184b lim: 10 exec/s: 31 rss: 69Mb L: 9/10 MS: 1 ChangeBinInt- 00:07:29.986 [2024-11-17 23:07:26.553040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000023 cdw11:00000000 00:07:29.986 [2024-11-17 23:07:26.553069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:29.986 [2024-11-17 23:07:26.553114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008b0f cdw11:00000000 00:07:29.986 [2024-11-17 23:07:26.553130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:29.986 [2024-11-17 23:07:26.553166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE 
IO SQ (00) qid:0 cid:6 nsid:0 cdw10:000051c2 cdw11:00000000 00:07:29.986 [2024-11-17 23:07:26.553180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:29.986 [2024-11-17 23:07:26.553207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000065af cdw11:00000000 00:07:29.986 [2024-11-17 23:07:26.553222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:29.986 [2024-11-17 23:07:26.553248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00002a10 cdw11:00000000 00:07:29.986 [2024-11-17 23:07:26.553263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:30.245 #32 NEW cov: 11756 ft: 14365 corp: 28/194b lim: 10 exec/s: 32 rss: 69Mb L: 10/10 MS: 1 InsertByte- 00:07:30.245 [2024-11-17 23:07:26.623068] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000010a cdw11:00000000 00:07:30.245 [2024-11-17 23:07:26.623098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.245 #33 NEW cov: 11756 ft: 14400 corp: 29/196b lim: 10 exec/s: 33 rss: 69Mb L: 2/10 MS: 1 InsertByte- 00:07:30.245 [2024-11-17 23:07:26.673372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:30.245 [2024-11-17 23:07:26.673402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.245 [2024-11-17 23:07:26.673431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:30.245 [2024-11-17 23:07:26.673446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.245 [2024-11-17 23:07:26.673472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:30.245 [2024-11-17 23:07:26.673487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.245 [2024-11-17 23:07:26.673513] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000f609 cdw11:00000000 00:07:30.245 [2024-11-17 23:07:26.673528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.245 [2024-11-17 23:07:26.673562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:07:30.245 [2024-11-17 23:07:26.673576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:30.245 #34 NEW cov: 11756 ft: 14414 corp: 30/206b lim: 10 exec/s: 34 rss: 70Mb L: 10/10 MS: 1 ChangeBinInt- 00:07:30.245 [2024-11-17 23:07:26.733493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000008b cdw11:00000000 00:07:30.245 [2024-11-17 23:07:26.733521] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:07:30.245 [2024-11-17 23:07:26.733558] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000f51 cdw11:00000000 00:07:30.245 [2024-11-17 23:07:26.733572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:30.245 [2024-11-17 23:07:26.733614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000c2ff cdw11:00000000 00:07:30.245 [2024-11-17 23:07:26.733628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:30.245 [2024-11-17 23:07:26.733660] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:000065af cdw11:00000000 00:07:30.245 [2024-11-17 23:07:26.733674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:30.245 [2024-11-17 23:07:26.733700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:8 nsid:0 cdw10:00002a0a cdw11:00000000 00:07:30.245 [2024-11-17 23:07:26.733715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:30.245 #35 NEW cov: 11756 ft: 14431 corp: 31/216b lim: 10 exec/s: 35 rss: 70Mb L: 10/10 MS: 1 InsertByte- 00:07:30.245 [2024-11-17 23:07:26.783453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:0000ffff cdw11:00000000 00:07:30.246 [2024-11-17 23:07:26.783482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:30.246 #36 NEW cov: 11756 ft: 14441 corp: 32/218b lim: 10 exec/s: 18 rss: 70Mb L: 2/10 MS: 1 EraseBytes- 00:07:30.246 #36 DONE cov: 11756 ft: 14441 corp: 32/218b lim: 10 exec/s: 18 rss: 70Mb 00:07:30.246 ###### Recommended dictionary. ###### 00:07:30.246 "\000\213\017Q\302e\257*" # Uses: 2 00:07:30.246 ###### End of recommended dictionary. 
###### 00:07:30.246 Done 36 runs in 2 second(s) 00:07:30.505 23:07:26 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_7.conf 00:07:30.505 23:07:26 -- ../common.sh@72 -- # (( i++ )) 00:07:30.505 23:07:26 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:30.505 23:07:26 -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:07:30.505 23:07:26 -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:07:30.505 23:07:26 -- nvmf/run.sh@24 -- # local timen=1 00:07:30.505 23:07:26 -- nvmf/run.sh@25 -- # local core=0x1 00:07:30.505 23:07:26 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:30.505 23:07:26 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:07:30.505 23:07:26 -- nvmf/run.sh@29 -- # printf %02d 8 00:07:30.505 23:07:26 -- nvmf/run.sh@29 -- # port=4408 00:07:30.505 23:07:26 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:30.505 23:07:26 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:07:30.505 23:07:26 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:30.505 23:07:26 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 -r /var/tmp/spdk8.sock 00:07:30.505 [2024-11-17 23:07:26.994395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:30.505 [2024-11-17 23:07:26.994479] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300359 ] 00:07:30.505 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.765 [2024-11-17 23:07:27.181958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.765 [2024-11-17 23:07:27.244612] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:30.765 [2024-11-17 23:07:27.244739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.765 [2024-11-17 23:07:27.302705] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.765 [2024-11-17 23:07:27.319044] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:07:30.765 INFO: Running with entropic power schedule (0xFF, 100). 
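The run.sh trace above wires up fuzz target 8: the listener port is derived from the target index (apparently 44 plus the zero-padded index from printf %02d 8, giving 4408), sed rewrites the JSON config's default trsvcid 4420 to 4408, and the resulting transport ID string is handed to llvm_nvme_fuzz via -F. As a minimal standalone sketch (illustration only, not SPDK's own parser, which is spdk_nvme_transport_id_parse() in the SPDK headers), splitting such a trid string into key/value pairs looks like this:

    /* Standalone sketch: split an SPDK-style transport ID string into
     * key/value pairs. Fields are space-separated and only the first ':'
     * separates key from value, so a subnqn containing ':' stays intact. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char trid[] = "trtype:tcp adrfam:IPv4 "
                      "subnqn:nqn.2016-06.io.spdk:cnode1 "
                      "traddr:127.0.0.1 trsvcid:4408";

        for (char *tok = strtok(trid, " "); tok != NULL;
             tok = strtok(NULL, " ")) {
            char *colon = strchr(tok, ':');
            if (colon == NULL) {
                continue; /* malformed field, skip it */
            }
            *colon = '\0';
            printf("%-8s = %s\n", tok, colon + 1);
        }
        return 0;
    }

Compiled and run, this prints trtype = tcp, adrfam = IPv4, subnqn = nqn.2016-06.io.spdk:cnode1, traddr = 127.0.0.1 and trsvcid = 4408, i.e. exactly the pieces the fuzzer needs to reach the TCP listener started above.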
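In the fuzz output itself (above for target 7, below for target 8), every *NOTICE* pair is SPDK echoing the mutated admin command and the controller's completion: DELETE IO SQ is admin opcode 0x00 (its cdw10 low word names the queue to delete, which is why the fuzzer keeps flipping those bits), NAMESPACE ATTACHMENT is opcode 0x15, and "INVALID OPCODE (00/01)" reads as (status code type/status code), i.e. generic status, code 0x01 Invalid Command Opcode. A standalone sketch decoding the sqhd/p/m/dnr fields from completion dwords 2 and 3, assuming the layout in the NVMe base specification (the names here are illustrative, not SPDK's):

    /* Standalone sketch: unpack the fields spdk_nvme_print_completion()
     * logs from raw NVMe completion-queue-entry dwords 2 and 3. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Values matching a completion above: sqid 0, sqhd 0x000f, cid 4,
         * status INVALID OPCODE (00/01), p/m/dnr all zero. */
        uint32_t dw0 = 0;                    /* command-specific result */
        uint32_t dw2 = 0x0000000f;           /* sqid 31:16 | sqhd 15:0 */
        uint32_t dw3 = (0x01u << 17) | 0x4;  /* status 31:17 | p 16 | cid 15:0 */

        unsigned sqhd = dw2 & 0xffff;
        unsigned sqid = dw2 >> 16;
        unsigned cid  = dw3 & 0xffff;
        unsigned p    = (dw3 >> 16) & 0x1;   /* phase tag */
        unsigned sc   = (dw3 >> 17) & 0xff;  /* status code */
        unsigned sct  = (dw3 >> 25) & 0x7;   /* status code type */
        unsigned m    = (dw3 >> 30) & 0x1;   /* more status available */
        unsigned dnr  = dw3 >> 31;           /* do not retry */

        printf("(%02x/%02x) qid:%u cid:%u cdw0:%x sqhd:%04x p:%u m:%u dnr:%u\n",
               sct, sc, sqid, cid, (unsigned)dw0, sqhd, p, m, dnr);
        return 0;
    }

This prints (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0, the same shape as the completion lines in the stream, which makes the log much easier to eyeball.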
00:07:30.765 INFO: Seed: 2356384406 00:07:30.765 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:30.765 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:30.765 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:07:30.765 INFO: A corpus is not provided, starting from an empty corpus 00:07:30.765 [2024-11-17 23:07:27.364758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:30.765 [2024-11-17 23:07:27.364785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.023 #2 INITED cov: 11557 ft: 11558 corp: 1/1b exec/s: 0 rss: 65Mb 00:07:31.023 [2024-11-17 23:07:27.394693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.024 [2024-11-17 23:07:27.394718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.024 #3 NEW cov: 11670 ft: 11973 corp: 2/2b lim: 5 exec/s: 0 rss: 66Mb L: 1/1 MS: 1 ShuffleBytes- 00:07:31.024 [2024-11-17 23:07:27.434813] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.024 [2024-11-17 23:07:27.434837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.024 #4 NEW cov: 11676 ft: 12308 corp: 3/3b lim: 5 exec/s: 0 rss: 66Mb L: 1/1 MS: 1 CrossOver- 00:07:31.024 [2024-11-17 23:07:27.474937] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.024 [2024-11-17 23:07:27.474962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.024 #5 NEW cov: 11761 ft: 12612 corp: 4/4b lim: 5 exec/s: 0 rss: 66Mb L: 1/1 MS: 1 ChangeByte- 00:07:31.024 [2024-11-17 23:07:27.515063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.024 [2024-11-17 23:07:27.515088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.024 #6 NEW cov: 11761 ft: 12719 corp: 5/5b lim: 5 exec/s: 0 rss: 66Mb L: 1/1 MS: 1 ShuffleBytes- 00:07:31.024 [2024-11-17 23:07:27.555137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.024 [2024-11-17 23:07:27.555162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.024 #7 NEW cov: 11761 ft: 12789 corp: 6/6b lim: 5 exec/s: 0 rss: 66Mb L: 1/1 MS: 1 ChangeBit- 00:07:31.024 [2024-11-17 23:07:27.595720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.024 [2024-11-17 23:07:27.595746] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.024 [2024-11-17 23:07:27.595802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.024 [2024-11-17 23:07:27.595816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.024 [2024-11-17 23:07:27.595869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.024 [2024-11-17 23:07:27.595882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.024 [2024-11-17 23:07:27.595935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.024 [2024-11-17 23:07:27.595947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.024 #8 NEW cov: 11761 ft: 13680 corp: 7/10b lim: 5 exec/s: 0 rss: 66Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:31.024 [2024-11-17 23:07:27.635365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.024 [2024-11-17 23:07:27.635392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.283 #9 NEW cov: 11761 ft: 13725 corp: 8/11b lim: 5 exec/s: 0 rss: 66Mb L: 1/4 MS: 1 ChangeBit- 00:07:31.283 [2024-11-17 23:07:27.675976] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.676001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.283 [2024-11-17 23:07:27.676072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.676086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.283 [2024-11-17 23:07:27.676142] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.676155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.283 [2024-11-17 23:07:27.676210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.676224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.283 #10 NEW cov: 11761 ft: 13756 corp: 9/15b lim: 5 exec/s: 0 rss: 66Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:07:31.283 [2024-11-17 23:07:27.715643] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.715668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.283 #11 NEW cov: 11761 ft: 13792 corp: 10/16b lim: 5 exec/s: 0 rss: 66Mb L: 1/4 MS: 1 ChangeByte- 00:07:31.283 [2024-11-17 23:07:27.756364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.756388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.283 [2024-11-17 23:07:27.756444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.756458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.283 [2024-11-17 23:07:27.756512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.756524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.283 [2024-11-17 23:07:27.756583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.756595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.283 [2024-11-17 23:07:27.756650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.756666] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:31.283 #12 NEW cov: 11761 ft: 13878 corp: 11/21b lim: 5 exec/s: 0 rss: 66Mb L: 5/5 MS: 1 InsertByte- 00:07:31.283 [2024-11-17 23:07:27.806488] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.806512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.283 [2024-11-17 23:07:27.806582] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.806596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.283 [2024-11-17 23:07:27.806651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.806664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0 00:07:31.283 [2024-11-17 23:07:27.806717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.806730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.283 [2024-11-17 23:07:27.806783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.806797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:31.283 #13 NEW cov: 11761 ft: 13913 corp: 12/26b lim: 5 exec/s: 0 rss: 66Mb L: 5/5 MS: 1 InsertByte- 00:07:31.283 [2024-11-17 23:07:27.846588] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.846613] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.283 [2024-11-17 23:07:27.846694] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.846709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.283 [2024-11-17 23:07:27.846765] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.846777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.283 [2024-11-17 23:07:27.846831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.846844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.283 [2024-11-17 23:07:27.846898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.283 [2024-11-17 23:07:27.846911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:31.283 #14 NEW cov: 11761 ft: 13934 corp: 13/31b lim: 5 exec/s: 0 rss: 66Mb L: 5/5 MS: 1 CMP- DE: "\377\377\377\377"- 00:07:31.284 [2024-11-17 23:07:27.886564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.284 [2024-11-17 23:07:27.886588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.284 [2024-11-17 23:07:27.886658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.284 [2024-11-17 23:07:27.886672] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.284 [2024-11-17 23:07:27.886737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.284 [2024-11-17 23:07:27.886750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.284 [2024-11-17 23:07:27.886777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.284 [2024-11-17 23:07:27.886789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.543 #15 NEW cov: 11761 ft: 13971 corp: 14/35b lim: 5 exec/s: 0 rss: 66Mb L: 4/5 MS: 1 CopyPart- 00:07:31.543 [2024-11-17 23:07:27.926423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.543 [2024-11-17 23:07:27.926448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.543 [2024-11-17 23:07:27.926520] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.543 [2024-11-17 23:07:27.926539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.543 #16 NEW cov: 11761 ft: 14187 corp: 15/37b lim: 5 exec/s: 0 rss: 66Mb L: 2/5 MS: 1 InsertByte- 00:07:31.543 [2024-11-17 23:07:27.966945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.543 [2024-11-17 23:07:27.966970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.543 [2024-11-17 23:07:27.967027] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.543 [2024-11-17 23:07:27.967040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.543 [2024-11-17 23:07:27.967094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.543 [2024-11-17 23:07:27.967108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.543 [2024-11-17 23:07:27.967164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.543 [2024-11-17 23:07:27.967177] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.543 [2024-11-17 23:07:27.967233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:07:31.543 [2024-11-17 23:07:27.967246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:31.543 #17 NEW cov: 11761 ft: 14196 corp: 16/42b lim: 5 exec/s: 0 rss: 66Mb L: 5/5 MS: 1 ChangeBinInt- 00:07:31.543 [2024-11-17 23:07:28.007103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.543 [2024-11-17 23:07:28.007129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.543 [2024-11-17 23:07:28.007199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.543 [2024-11-17 23:07:28.007213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.543 [2024-11-17 23:07:28.007271] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.543 [2024-11-17 23:07:28.007284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.543 [2024-11-17 23:07:28.007337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.543 [2024-11-17 23:07:28.007351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.543 [2024-11-17 23:07:28.007405] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.543 [2024-11-17 23:07:28.007418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:31.543 #18 NEW cov: 11761 ft: 14254 corp: 17/47b lim: 5 exec/s: 0 rss: 66Mb L: 5/5 MS: 1 ChangeByte- 00:07:31.543 [2024-11-17 23:07:28.046886] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.543 [2024-11-17 23:07:28.046912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.543 [2024-11-17 23:07:28.046966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.543 [2024-11-17 23:07:28.046980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.543 [2024-11-17 23:07:28.047035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.543 [2024-11-17 23:07:28.047049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.543 #19 NEW cov: 11761 ft: 14491 corp: 18/50b lim: 5 exec/s: 0 rss: 67Mb L: 3/5 MS: 1 EraseBytes- 00:07:31.543 [2024-11-17 
23:07:28.096777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.543 [2024-11-17 23:07:28.096802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.543 #20 NEW cov: 11761 ft: 14510 corp: 19/51b lim: 5 exec/s: 0 rss: 67Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:31.543 [2024-11-17 23:07:28.137034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.543 [2024-11-17 23:07:28.137059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.543 [2024-11-17 23:07:28.137114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.543 [2024-11-17 23:07:28.137130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.803 #21 NEW cov: 11761 ft: 14572 corp: 20/53b lim: 5 exec/s: 0 rss: 67Mb L: 2/5 MS: 1 CrossOver- 00:07:31.803 [2024-11-17 23:07:28.177550] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.803 [2024-11-17 23:07:28.177576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.803 [2024-11-17 23:07:28.177631] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.803 [2024-11-17 23:07:28.177644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.803 [2024-11-17 23:07:28.177697] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.803 [2024-11-17 23:07:28.177710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:31.803 [2024-11-17 23:07:28.177762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.803 [2024-11-17 23:07:28.177774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:31.803 [2024-11-17 23:07:28.177829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.803 [2024-11-17 23:07:28.177842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:31.803 #22 NEW cov: 11761 ft: 14610 corp: 21/58b lim: 5 exec/s: 0 rss: 67Mb L: 5/5 MS: 1 CrossOver- 00:07:31.803 [2024-11-17 23:07:28.217202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.803 
[2024-11-17 23:07:28.217227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:31.803 [2024-11-17 23:07:28.217297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.803 [2024-11-17 23:07:28.217310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:31.803 #23 NEW cov: 11761 ft: 14632 corp: 22/60b lim: 5 exec/s: 0 rss: 67Mb L: 2/5 MS: 1 CrossOver- 00:07:31.803 [2024-11-17 23:07:28.257200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:31.803 [2024-11-17 23:07:28.257225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.063 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:32.063 #24 NEW cov: 11784 ft: 14695 corp: 23/61b lim: 5 exec/s: 24 rss: 68Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:32.063 [2024-11-17 23:07:28.548526] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.063 [2024-11-17 23:07:28.548563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.063 [2024-11-17 23:07:28.548632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.063 [2024-11-17 23:07:28.548646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.063 [2024-11-17 23:07:28.548704] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.063 [2024-11-17 23:07:28.548717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.063 [2024-11-17 23:07:28.548771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.063 [2024-11-17 23:07:28.548783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.063 [2024-11-17 23:07:28.548837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.063 [2024-11-17 23:07:28.548850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.063 [2024-11-17 23:07:28.588598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.063 [2024-11-17 23:07:28.588624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.063 [2024-11-17 
23:07:28.588693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.063 [2024-11-17 23:07:28.588707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.063 [2024-11-17 23:07:28.588761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.063 [2024-11-17 23:07:28.588774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.063 [2024-11-17 23:07:28.588828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.063 [2024-11-17 23:07:28.588841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.063 [2024-11-17 23:07:28.588892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.063 [2024-11-17 23:07:28.588905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.063 #26 NEW cov: 11784 ft: 14708 corp: 24/66b lim: 5 exec/s: 26 rss: 68Mb L: 5/5 MS: 2 PersAutoDict-ShuffleBytes- DE: "\377\377\377\377"- 00:07:32.063 [2024-11-17 23:07:28.628135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.063 [2024-11-17 23:07:28.628160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.063 #27 NEW cov: 11784 ft: 14724 corp: 25/67b lim: 5 exec/s: 27 rss: 68Mb L: 1/5 MS: 1 CrossOver- 00:07:32.063 [2024-11-17 23:07:28.668833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.063 [2024-11-17 23:07:28.668858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.063 [2024-11-17 23:07:28.668926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.063 [2024-11-17 23:07:28.668940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.063 [2024-11-17 23:07:28.668996] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.063 [2024-11-17 23:07:28.669009] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.063 [2024-11-17 23:07:28.669064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.063 [2024-11-17 23:07:28.669077] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.063 [2024-11-17 23:07:28.669129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.063 [2024-11-17 23:07:28.669143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.324 #28 NEW cov: 11784 ft: 14746 corp: 26/72b lim: 5 exec/s: 28 rss: 68Mb L: 5/5 MS: 1 ChangeBit- 00:07:32.324 [2024-11-17 23:07:28.718992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.324 [2024-11-17 23:07:28.719017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.324 [2024-11-17 23:07:28.719071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.324 [2024-11-17 23:07:28.719084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.324 [2024-11-17 23:07:28.719137] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.324 [2024-11-17 23:07:28.719150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.324 [2024-11-17 23:07:28.719200] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.324 [2024-11-17 23:07:28.719212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.324 [2024-11-17 23:07:28.719266] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.324 [2024-11-17 23:07:28.719278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.324 #29 NEW cov: 11784 ft: 14758 corp: 27/77b lim: 5 exec/s: 29 rss: 68Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:32.324 [2024-11-17 23:07:28.758964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.324 [2024-11-17 23:07:28.758988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.324 [2024-11-17 23:07:28.759058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.324 [2024-11-17 23:07:28.759071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.324 [2024-11-17 23:07:28.759124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.324 
[2024-11-17 23:07:28.759137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.324 [2024-11-17 23:07:28.759192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.324 [2024-11-17 23:07:28.759206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.324 #30 NEW cov: 11784 ft: 14765 corp: 28/81b lim: 5 exec/s: 30 rss: 68Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:07:32.324 [2024-11-17 23:07:28.799204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.324 [2024-11-17 23:07:28.799227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.324 [2024-11-17 23:07:28.799283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.324 [2024-11-17 23:07:28.799296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.324 [2024-11-17 23:07:28.799347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.324 [2024-11-17 23:07:28.799377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.324 [2024-11-17 23:07:28.799430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.324 [2024-11-17 23:07:28.799443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.324 [2024-11-17 23:07:28.799494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.324 [2024-11-17 23:07:28.799508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.324 #31 NEW cov: 11784 ft: 14774 corp: 29/86b lim: 5 exec/s: 31 rss: 68Mb L: 5/5 MS: 1 CMP- DE: "\000\002\000\000"- 00:07:32.324 [2024-11-17 23:07:28.848771] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.324 [2024-11-17 23:07:28.848795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.324 #32 NEW cov: 11784 ft: 14785 corp: 30/87b lim: 5 exec/s: 32 rss: 68Mb L: 1/5 MS: 1 ChangeByte- 00:07:32.324 [2024-11-17 23:07:28.888847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.324 [2024-11-17 23:07:28.888871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.324 #33 NEW cov: 
11784 ft: 14794 corp: 31/88b lim: 5 exec/s: 33 rss: 69Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:32.324 [2024-11-17 23:07:28.929124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.324 [2024-11-17 23:07:28.929148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.324 [2024-11-17 23:07:28.929201] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.324 [2024-11-17 23:07:28.929214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.584 #34 NEW cov: 11784 ft: 14815 corp: 32/90b lim: 5 exec/s: 34 rss: 69Mb L: 2/5 MS: 1 InsertByte- 00:07:32.584 [2024-11-17 23:07:28.969729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.584 [2024-11-17 23:07:28.969754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.584 [2024-11-17 23:07:28.969805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.584 [2024-11-17 23:07:28.969818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.584 [2024-11-17 23:07:28.969871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:28.969884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.585 [2024-11-17 23:07:28.969938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:28.969950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.585 [2024-11-17 23:07:28.970002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:28.970015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.585 #35 NEW cov: 11784 ft: 14907 corp: 33/95b lim: 5 exec/s: 35 rss: 69Mb L: 5/5 MS: 1 ChangeBit- 00:07:32.585 [2024-11-17 23:07:29.019879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:29.019904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.585 [2024-11-17 23:07:29.019958] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 
[2024-11-17 23:07:29.019972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.585 [2024-11-17 23:07:29.020023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:29.020037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.585 [2024-11-17 23:07:29.020065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:29.020079] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.585 [2024-11-17 23:07:29.020135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:29.020147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.585 #36 NEW cov: 11784 ft: 14925 corp: 34/100b lim: 5 exec/s: 36 rss: 69Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:32.585 [2024-11-17 23:07:29.059502] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:29.059526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.585 [2024-11-17 23:07:29.059601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:29.059618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.585 #37 NEW cov: 11784 ft: 14943 corp: 35/102b lim: 5 exec/s: 37 rss: 69Mb L: 2/5 MS: 1 EraseBytes- 00:07:32.585 [2024-11-17 23:07:29.099919] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:29.099944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.585 [2024-11-17 23:07:29.099997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:29.100011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.585 [2024-11-17 23:07:29.100064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:29.100077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.585 [2024-11-17 23:07:29.100128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:29.100141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.585 #38 NEW cov: 11784 ft: 14953 corp: 36/106b lim: 5 exec/s: 38 rss: 69Mb L: 4/5 MS: 1 ChangeByte- 00:07:32.585 [2024-11-17 23:07:29.139592] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:29.139617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.585 #39 NEW cov: 11784 ft: 14964 corp: 37/107b lim: 5 exec/s: 39 rss: 69Mb L: 1/5 MS: 1 ChangeBit- 00:07:32.585 [2024-11-17 23:07:29.180319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:29.180344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.585 [2024-11-17 23:07:29.180401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:29.180414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.585 [2024-11-17 23:07:29.180465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:29.180479] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.585 [2024-11-17 23:07:29.180537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:29.180550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:32.585 [2024-11-17 23:07:29.180605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000004 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.585 [2024-11-17 23:07:29.180618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:32.843 #40 NEW cov: 11784 ft: 14976 corp: 38/112b lim: 5 exec/s: 40 rss: 69Mb L: 5/5 MS: 1 ShuffleBytes- 00:07:32.843 [2024-11-17 23:07:29.229988] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.844 [2024-11-17 23:07:29.230012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.844 [2024-11-17 23:07:29.230067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.844 [2024-11-17 23:07:29.230080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.844 #41 NEW cov: 11784 ft: 14994 corp: 39/114b lim: 5 exec/s: 41 rss: 69Mb L: 2/5 MS: 1 ChangeByte- 00:07:32.844 [2024-11-17 23:07:29.270234] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.844 [2024-11-17 23:07:29.270258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.844 [2024-11-17 23:07:29.270313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.844 [2024-11-17 23:07:29.270326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.844 [2024-11-17 23:07:29.270391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.844 [2024-11-17 23:07:29.270405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.844 #42 NEW cov: 11784 ft: 14995 corp: 40/117b lim: 5 exec/s: 42 rss: 69Mb L: 3/5 MS: 1 CrossOver- 00:07:32.844 [2024-11-17 23:07:29.310361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.844 [2024-11-17 23:07:29.310385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.844 [2024-11-17 23:07:29.310440] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.844 [2024-11-17 23:07:29.310453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.844 [2024-11-17 23:07:29.310504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.844 [2024-11-17 23:07:29.310539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:32.844 #43 NEW cov: 11784 ft: 15005 corp: 41/120b lim: 5 exec/s: 43 rss: 69Mb L: 3/5 MS: 1 CrossOver- 00:07:32.844 [2024-11-17 23:07:29.350320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.844 [2024-11-17 23:07:29.350344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:32.844 [2024-11-17 23:07:29.350396] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:32.844 [2024-11-17 23:07:29.350410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:32.844 #44 NEW cov: 11784 ft: 15009 corp: 42/122b lim: 5 exec/s: 22 rss: 69Mb L: 2/5 MS: 1 CopyPart- 00:07:32.844 #44 DONE cov: 11784 ft: 15009 corp: 42/122b lim: 5 
exec/s: 22 rss: 69Mb 00:07:32.844 ###### Recommended dictionary. ###### 00:07:32.844 "\377\377\377\377" # Uses: 1 00:07:32.844 "\000\002\000\000" # Uses: 0 00:07:32.844 ###### End of recommended dictionary. ###### 00:07:32.844 Done 44 runs in 2 second(s) 00:07:33.103 23:07:29 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_8.conf 00:07:33.103 23:07:29 -- ../common.sh@72 -- # (( i++ )) 00:07:33.103 23:07:29 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:33.103 23:07:29 -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:07:33.103 23:07:29 -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:07:33.103 23:07:29 -- nvmf/run.sh@24 -- # local timen=1 00:07:33.103 23:07:29 -- nvmf/run.sh@25 -- # local core=0x1 00:07:33.103 23:07:29 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:33.103 23:07:29 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:07:33.103 23:07:29 -- nvmf/run.sh@29 -- # printf %02d 9 00:07:33.103 23:07:29 -- nvmf/run.sh@29 -- # port=4409 00:07:33.103 23:07:29 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:33.103 23:07:29 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:07:33.103 23:07:29 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:33.103 23:07:29 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 -r /var/tmp/spdk9.sock 00:07:33.103 [2024-11-17 23:07:29.531627] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:33.103 [2024-11-17 23:07:29.531730] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300661 ] 00:07:33.103 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.103 [2024-11-17 23:07:29.715406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.362 [2024-11-17 23:07:29.780092] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:33.362 [2024-11-17 23:07:29.780220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.362 [2024-11-17 23:07:29.838460] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.362 [2024-11-17 23:07:29.854789] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:07:33.362 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:33.362 INFO: Seed: 600373697 00:07:33.362 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:33.362 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:33.362 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:07:33.362 INFO: A corpus is not provided, starting from an empty corpus 00:07:33.362 [2024-11-17 23:07:29.920825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.362 [2024-11-17 23:07:29.920860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.362 #2 INITED cov: 11549 ft: 11558 corp: 1/1b exec/s: 0 rss: 66Mb 00:07:33.362 [2024-11-17 23:07:29.961090] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.362 [2024-11-17 23:07:29.961118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.362 [2024-11-17 23:07:29.961245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.362 [2024-11-17 23:07:29.961262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.879 NEW_FUNC[1/1]: 0x1277658 in nvmf_transport_poll_group_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/transport.c:723 00:07:33.879 #3 NEW cov: 11670 ft: 12855 corp: 2/3b lim: 5 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 InsertByte- 00:07:33.879 [2024-11-17 23:07:30.292260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.879 [2024-11-17 23:07:30.292297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.879 [2024-11-17 23:07:30.292428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.879 [2024-11-17 23:07:30.292445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.879 #4 NEW cov: 11676 ft: 13057 corp: 3/5b lim: 5 exec/s: 0 rss: 68Mb L: 2/2 MS: 1 ShuffleBytes- 00:07:33.879 [2024-11-17 23:07:30.342071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.879 [2024-11-17 23:07:30.342102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.879 [2024-11-17 23:07:30.342231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.879 [2024-11-17 23:07:30.342249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.879 #5 NEW cov: 11761 ft: 13391 corp: 4/7b lim: 5 exec/s: 0 rss: 
68Mb L: 2/2 MS: 1 ShuffleBytes- 00:07:33.879 [2024-11-17 23:07:30.392750] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.879 [2024-11-17 23:07:30.392779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.879 [2024-11-17 23:07:30.392929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.879 [2024-11-17 23:07:30.392946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:33.879 [2024-11-17 23:07:30.393074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.879 [2024-11-17 23:07:30.393090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:33.879 #6 NEW cov: 11761 ft: 13717 corp: 5/10b lim: 5 exec/s: 0 rss: 68Mb L: 3/3 MS: 1 CrossOver- 00:07:33.879 [2024-11-17 23:07:30.442296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:33.879 [2024-11-17 23:07:30.442324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:33.879 #7 NEW cov: 11761 ft: 13906 corp: 6/11b lim: 5 exec/s: 0 rss: 68Mb L: 1/3 MS: 1 ShuffleBytes- 00:07:34.138 [2024-11-17 23:07:30.492500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.138 [2024-11-17 23:07:30.492529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.138 #8 NEW cov: 11761 ft: 13987 corp: 7/12b lim: 5 exec/s: 0 rss: 68Mb L: 1/3 MS: 1 ChangeBit- 00:07:34.138 [2024-11-17 23:07:30.542536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.138 [2024-11-17 23:07:30.542577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.138 #9 NEW cov: 11761 ft: 14042 corp: 8/13b lim: 5 exec/s: 0 rss: 68Mb L: 1/3 MS: 1 ChangeBit- 00:07:34.138 [2024-11-17 23:07:30.593964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.138 [2024-11-17 23:07:30.593990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.138 [2024-11-17 23:07:30.594119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.138 [2024-11-17 23:07:30.594136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.138 [2024-11-17 23:07:30.594271] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.138 [2024-11-17 23:07:30.594289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.138 [2024-11-17 23:07:30.594414] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.138 [2024-11-17 23:07:30.594431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.138 [2024-11-17 23:07:30.594575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.138 [2024-11-17 23:07:30.594592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:34.138 #10 NEW cov: 11761 ft: 14400 corp: 9/18b lim: 5 exec/s: 0 rss: 68Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:07:34.139 [2024-11-17 23:07:30.653857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.139 [2024-11-17 23:07:30.653885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.139 [2024-11-17 23:07:30.654026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.139 [2024-11-17 23:07:30.654044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.139 [2024-11-17 23:07:30.654180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.139 [2024-11-17 23:07:30.654199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.139 [2024-11-17 23:07:30.654330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.139 [2024-11-17 23:07:30.654348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.139 #11 NEW cov: 11761 ft: 14439 corp: 10/22b lim: 5 exec/s: 0 rss: 68Mb L: 4/5 MS: 1 InsertByte- 00:07:34.139 [2024-11-17 23:07:30.713380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.139 [2024-11-17 23:07:30.713405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.139 [2024-11-17 23:07:30.713544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.139 [2024-11-17 23:07:30.713564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:07:34.139 #12 NEW cov: 11761 ft: 14465 corp: 11/24b lim: 5 exec/s: 0 rss: 69Mb L: 2/5 MS: 1 ChangeBit- 00:07:34.398 [2024-11-17 23:07:30.763361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.398 [2024-11-17 23:07:30.763389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.398 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:34.398 #13 NEW cov: 11784 ft: 14480 corp: 12/25b lim: 5 exec/s: 0 rss: 69Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:34.398 [2024-11-17 23:07:30.813517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.398 [2024-11-17 23:07:30.813548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.398 #14 NEW cov: 11784 ft: 14534 corp: 13/26b lim: 5 exec/s: 0 rss: 69Mb L: 1/5 MS: 1 ChangeByte- 00:07:34.398 [2024-11-17 23:07:30.864815] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.398 [2024-11-17 23:07:30.864841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.398 [2024-11-17 23:07:30.864974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.398 [2024-11-17 23:07:30.864990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.398 [2024-11-17 23:07:30.865129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000d cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.398 [2024-11-17 23:07:30.865147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.398 [2024-11-17 23:07:30.865285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.398 [2024-11-17 23:07:30.865303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.398 [2024-11-17 23:07:30.865444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.398 [2024-11-17 23:07:30.865460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:34.398 #15 NEW cov: 11784 ft: 14569 corp: 14/31b lim: 5 exec/s: 0 rss: 69Mb L: 5/5 MS: 1 ChangeByte- 00:07:34.398 [2024-11-17 23:07:30.923898] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.398 [2024-11-17 23:07:30.923924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.398 #16 NEW cov: 11784 ft: 14587 corp: 15/32b lim: 5 exec/s: 16 rss: 69Mb L: 1/5 MS: 1 ChangeBinInt- 00:07:34.398 [2024-11-17 23:07:30.973935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.398 [2024-11-17 23:07:30.973961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.398 #17 NEW cov: 11784 ft: 14595 corp: 16/33b lim: 5 exec/s: 17 rss: 69Mb L: 1/5 MS: 1 CopyPart- 00:07:34.657 [2024-11-17 23:07:31.024129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.657 [2024-11-17 23:07:31.024156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.657 #18 NEW cov: 11784 ft: 14623 corp: 17/34b lim: 5 exec/s: 18 rss: 69Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:34.657 [2024-11-17 23:07:31.074276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.657 [2024-11-17 23:07:31.074301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.657 #19 NEW cov: 11784 ft: 14637 corp: 18/35b lim: 5 exec/s: 19 rss: 69Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:34.657 [2024-11-17 23:07:31.124431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.657 [2024-11-17 23:07:31.124458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.657 #20 NEW cov: 11784 ft: 14651 corp: 19/36b lim: 5 exec/s: 20 rss: 69Mb L: 1/5 MS: 1 CopyPart- 00:07:34.657 [2024-11-17 23:07:31.184734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.657 [2024-11-17 23:07:31.184763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.657 #21 NEW cov: 11784 ft: 14667 corp: 20/37b lim: 5 exec/s: 21 rss: 69Mb L: 1/5 MS: 1 ChangeByte- 00:07:34.657 [2024-11-17 23:07:31.245495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.657 [2024-11-17 23:07:31.245522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.657 [2024-11-17 23:07:31.245680] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.657 [2024-11-17 23:07:31.245699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.657 [2024-11-17 23:07:31.245835] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:07:34.657 [2024-11-17 23:07:31.245853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.917 #22 NEW cov: 11784 ft: 14674 corp: 21/40b lim: 5 exec/s: 22 rss: 69Mb L: 3/5 MS: 1 InsertByte- 00:07:34.917 [2024-11-17 23:07:31.306245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.917 [2024-11-17 23:07:31.306272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.917 [2024-11-17 23:07:31.306413] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.917 [2024-11-17 23:07:31.306430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.917 [2024-11-17 23:07:31.306564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.917 [2024-11-17 23:07:31.306583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:34.917 [2024-11-17 23:07:31.306723] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.917 [2024-11-17 23:07:31.306743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:34.917 [2024-11-17 23:07:31.306879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.917 [2024-11-17 23:07:31.306896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:34.917 #23 NEW cov: 11784 ft: 14694 corp: 22/45b lim: 5 exec/s: 23 rss: 69Mb L: 5/5 MS: 1 ChangeBit- 00:07:34.917 [2024-11-17 23:07:31.355121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.917 [2024-11-17 23:07:31.355147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.917 #24 NEW cov: 11784 ft: 14759 corp: 23/46b lim: 5 exec/s: 24 rss: 69Mb L: 1/5 MS: 1 ChangeByte- 00:07:34.917 [2024-11-17 23:07:31.405687] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.917 [2024-11-17 23:07:31.405714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.917 [2024-11-17 23:07:31.405845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000009 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.917 [2024-11-17 23:07:31.405864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.917 #25 NEW cov: 11784 ft: 14766 
corp: 24/48b lim: 5 exec/s: 25 rss: 69Mb L: 2/5 MS: 1 InsertByte- 00:07:34.917 [2024-11-17 23:07:31.465602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.917 [2024-11-17 23:07:31.465632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.917 #26 NEW cov: 11784 ft: 14790 corp: 25/49b lim: 5 exec/s: 26 rss: 69Mb L: 1/5 MS: 1 ChangeBit- 00:07:34.917 [2024-11-17 23:07:31.516350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.917 [2024-11-17 23:07:31.516376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:34.917 [2024-11-17 23:07:31.516507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.917 [2024-11-17 23:07:31.516524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:34.917 [2024-11-17 23:07:31.516653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:34.917 [2024-11-17 23:07:31.516670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.176 #27 NEW cov: 11784 ft: 14800 corp: 26/52b lim: 5 exec/s: 27 rss: 69Mb L: 3/5 MS: 1 CopyPart- 00:07:35.176 [2024-11-17 23:07:31.576822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.176 [2024-11-17 23:07:31.576849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.176 [2024-11-17 23:07:31.576970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.176 [2024-11-17 23:07:31.576990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.176 [2024-11-17 23:07:31.577108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.176 [2024-11-17 23:07:31.577123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.176 [2024-11-17 23:07:31.577251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.176 [2024-11-17 23:07:31.577268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.176 #28 NEW cov: 11784 ft: 14842 corp: 27/56b lim: 5 exec/s: 28 rss: 69Mb L: 4/5 MS: 1 InsertRepeatedBytes- 00:07:35.176 [2024-11-17 23:07:31.626053] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 
cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.176 [2024-11-17 23:07:31.626082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.176 #29 NEW cov: 11784 ft: 14853 corp: 28/57b lim: 5 exec/s: 29 rss: 69Mb L: 1/5 MS: 1 ShuffleBytes- 00:07:35.176 [2024-11-17 23:07:31.676500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.176 [2024-11-17 23:07:31.676528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.176 [2024-11-17 23:07:31.676656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.176 [2024-11-17 23:07:31.676674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.176 #30 NEW cov: 11784 ft: 14861 corp: 29/59b lim: 5 exec/s: 30 rss: 69Mb L: 2/5 MS: 1 ChangeByte- 00:07:35.176 [2024-11-17 23:07:31.726800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.177 [2024-11-17 23:07:31.726827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.177 [2024-11-17 23:07:31.726949] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.177 [2024-11-17 23:07:31.726968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.177 #31 NEW cov: 11784 ft: 14866 corp: 30/61b lim: 5 exec/s: 31 rss: 70Mb L: 2/5 MS: 1 CrossOver- 00:07:35.177 [2024-11-17 23:07:31.786955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.177 [2024-11-17 23:07:31.786982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.177 [2024-11-17 23:07:31.787107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.177 [2024-11-17 23:07:31.787124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.436 #32 NEW cov: 11784 ft: 14905 corp: 31/63b lim: 5 exec/s: 32 rss: 70Mb L: 2/5 MS: 1 ChangeBinInt- 00:07:35.436 [2024-11-17 23:07:31.837667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.436 [2024-11-17 23:07:31.837701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.436 [2024-11-17 23:07:31.837830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.436 
[2024-11-17 23:07:31.837846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.436 [2024-11-17 23:07:31.837979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.436 [2024-11-17 23:07:31.837997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.436 [2024-11-17 23:07:31.838123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.436 [2024-11-17 23:07:31.838140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:35.436 #33 NEW cov: 11784 ft: 14915 corp: 32/67b lim: 5 exec/s: 33 rss: 70Mb L: 4/5 MS: 1 ChangeByte- 00:07:35.436 [2024-11-17 23:07:31.897126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.436 [2024-11-17 23:07:31.897153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.437 [2024-11-17 23:07:31.897282] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:35.437 [2024-11-17 23:07:31.897315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.437 #34 NEW cov: 11784 ft: 14920 corp: 33/69b lim: 5 exec/s: 17 rss: 70Mb L: 2/5 MS: 1 ChangeByte- 00:07:35.437 #34 DONE cov: 11784 ft: 14920 corp: 33/69b lim: 5 exec/s: 17 rss: 70Mb 00:07:35.437 Done 34 runs in 2 second(s) 00:07:35.437 23:07:32 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_9.conf 00:07:35.437 23:07:32 -- ../common.sh@72 -- # (( i++ )) 00:07:35.437 23:07:32 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:35.437 23:07:32 -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:07:35.437 23:07:32 -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:07:35.437 23:07:32 -- nvmf/run.sh@24 -- # local timen=1 00:07:35.437 23:07:32 -- nvmf/run.sh@25 -- # local core=0x1 00:07:35.437 23:07:32 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:35.437 23:07:32 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:07:35.437 23:07:32 -- nvmf/run.sh@29 -- # printf %02d 10 00:07:35.437 23:07:32 -- nvmf/run.sh@29 -- # port=4410 00:07:35.437 23:07:32 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:35.696 23:07:32 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:07:35.696 23:07:32 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:35.696 23:07:32 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 
trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 -r /var/tmp/spdk10.sock 00:07:35.696 [2024-11-17 23:07:32.081319] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:35.697 [2024-11-17 23:07:32.081386] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301204 ] 00:07:35.697 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.697 [2024-11-17 23:07:32.260166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.956 [2024-11-17 23:07:32.325589] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:35.956 [2024-11-17 23:07:32.325719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.956 [2024-11-17 23:07:32.383600] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.956 [2024-11-17 23:07:32.399879] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:07:35.956 INFO: Running with entropic power schedule (0xFF, 100). 00:07:35.956 INFO: Seed: 3144372162 00:07:35.956 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:35.956 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:35.956 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:07:35.956 INFO: A corpus is not provided, starting from an empty corpus 00:07:35.956 #2 INITED exec/s: 0 rss: 60Mb 00:07:35.956 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:35.956 This may also happen if the target rejected all inputs we tried so far 00:07:35.956 [2024-11-17 23:07:32.444734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a1ad2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.956 [2024-11-17 23:07:32.444768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:35.956 [2024-11-17 23:07:32.444801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.956 [2024-11-17 23:07:32.444816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:35.956 [2024-11-17 23:07:32.444845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.956 [2024-11-17 23:07:32.444861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:35.956 [2024-11-17 23:07:32.444890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:35.956 [2024-11-17 23:07:32.444904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.219 NEW_FUNC[1/670]: 0x447688 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:07:36.219 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:36.219 #11 NEW cov: 11580 ft: 11581 corp: 2/39b lim: 40 exec/s: 0 rss: 68Mb L: 38/38 MS: 4 CopyPart-ChangeBit-CopyPart-InsertRepeatedBytes- 00:07:36.219 [2024-11-17 23:07:32.765454] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a1ad2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.219 [2024-11-17 23:07:32.765494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.219 [2024-11-17 23:07:32.765529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.219 [2024-11-17 23:07:32.765552] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.219 [2024-11-17 23:07:32.765583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.219 [2024-11-17 23:07:32.765598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.219 [2024-11-17 23:07:32.765633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.219 [2024-11-17 23:07:32.765648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:07:36.219 #12 NEW cov: 11693 ft: 12157 corp: 3/77b lim: 40 exec/s: 0 rss: 68Mb L: 38/38 MS: 1 ChangeBit- 00:07:36.565 [2024-11-17 23:07:32.835507] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.565 [2024-11-17 23:07:32.835548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.565 [2024-11-17 23:07:32.835585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.565 [2024-11-17 23:07:32.835601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.565 [2024-11-17 23:07:32.835632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.565 [2024-11-17 23:07:32.835648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.565 #17 NEW cov: 11699 ft: 12821 corp: 4/106b lim: 40 exec/s: 0 rss: 68Mb L: 29/38 MS: 5 CrossOver-InsertByte-ShuffleBytes-EraseBytes-CrossOver- 00:07:36.565 [2024-11-17 23:07:32.895615] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a282828 cdw11:28282828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.565 [2024-11-17 23:07:32.895647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.565 [2024-11-17 23:07:32.895681] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:28282828 cdw11:28282828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.565 [2024-11-17 23:07:32.895707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.565 [2024-11-17 23:07:32.895736] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:28282828 cdw11:28282828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.565 [2024-11-17 23:07:32.895753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.565 #18 NEW cov: 11784 ft: 13084 corp: 5/132b lim: 40 exec/s: 0 rss: 68Mb L: 26/38 MS: 1 InsertRepeatedBytes- 00:07:36.565 [2024-11-17 23:07:32.945758] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a1ad2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.565 [2024-11-17 23:07:32.945789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.566 [2024-11-17 23:07:32.945822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.566 [2024-11-17 23:07:32.945837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.566 [2024-11-17 23:07:32.945866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 
nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.566 [2024-11-17 23:07:32.945881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.566 [2024-11-17 23:07:32.945909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d2d2d252 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.566 [2024-11-17 23:07:32.945924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.566 #19 NEW cov: 11784 ft: 13265 corp: 6/170b lim: 40 exec/s: 0 rss: 68Mb L: 38/38 MS: 1 ChangeBit- 00:07:36.566 [2024-11-17 23:07:33.005951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.566 [2024-11-17 23:07:33.005982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.566 [2024-11-17 23:07:33.006016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.566 [2024-11-17 23:07:33.006032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.566 [2024-11-17 23:07:33.006063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.566 [2024-11-17 23:07:33.006078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.566 [2024-11-17 23:07:33.006108] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d2d2d200 cdw11:0000d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.566 [2024-11-17 23:07:33.006124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.566 #20 NEW cov: 11784 ft: 13356 corp: 7/202b lim: 40 exec/s: 0 rss: 68Mb L: 32/38 MS: 1 InsertRepeatedBytes- 00:07:36.566 [2024-11-17 23:07:33.066081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a1ad2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.566 [2024-11-17 23:07:33.066111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.566 [2024-11-17 23:07:33.066144] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.566 [2024-11-17 23:07:33.066160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.566 [2024-11-17 23:07:33.066188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.566 [2024-11-17 23:07:33.066203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.566 [2024-11-17 23:07:33.066231] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2c2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.566 [2024-11-17 23:07:33.066245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.566 #21 NEW cov: 11784 ft: 13410 corp: 8/240b lim: 40 exec/s: 0 rss: 68Mb L: 38/38 MS: 1 ChangeBit- 00:07:36.566 [2024-11-17 23:07:33.116245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a1ad2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.566 [2024-11-17 23:07:33.116276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.566 [2024-11-17 23:07:33.116310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.566 [2024-11-17 23:07:33.116325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.566 [2024-11-17 23:07:33.116355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.566 [2024-11-17 23:07:33.116376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.566 [2024-11-17 23:07:33.116406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2c2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.566 [2024-11-17 23:07:33.116421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.859 #22 NEW cov: 11784 ft: 13459 corp: 9/279b lim: 40 exec/s: 0 rss: 68Mb L: 39/39 MS: 1 InsertByte- 00:07:36.859 [2024-11-17 23:07:33.186453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a1ad2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.859 [2024-11-17 23:07:33.186486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.859 [2024-11-17 23:07:33.186521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.859 [2024-11-17 23:07:33.186547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.859 [2024-11-17 23:07:33.186579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.859 [2024-11-17 23:07:33.186595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.859 [2024-11-17 23:07:33.186626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.859 [2024-11-17 23:07:33.186641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:07:36.859 #23 NEW cov: 11784 ft: 13488 corp: 10/317b lim: 40 exec/s: 0 rss: 68Mb L: 38/39 MS: 1 ShuffleBytes- 00:07:36.859 [2024-11-17 23:07:33.236538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.859 [2024-11-17 23:07:33.236584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.859 [2024-11-17 23:07:33.236618] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.859 [2024-11-17 23:07:33.236634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.859 [2024-11-17 23:07:33.236665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.859 [2024-11-17 23:07:33.236681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.859 [2024-11-17 23:07:33.236711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d2d2d200 cdw11:005b00d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.859 [2024-11-17 23:07:33.236726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.859 #24 NEW cov: 11784 ft: 13648 corp: 11/350b lim: 40 exec/s: 0 rss: 69Mb L: 33/39 MS: 1 InsertByte- 00:07:36.859 [2024-11-17 23:07:33.296665] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.859 [2024-11-17 23:07:33.296695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.859 [2024-11-17 23:07:33.296729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.859 [2024-11-17 23:07:33.296749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.859 [2024-11-17 23:07:33.296779] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.859 [2024-11-17 23:07:33.296795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.859 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:36.859 #25 NEW cov: 11801 ft: 13721 corp: 12/376b lim: 40 exec/s: 0 rss: 69Mb L: 26/39 MS: 1 InsertRepeatedBytes- 00:07:36.859 [2024-11-17 23:07:33.346930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a0a1ad2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.859 [2024-11-17 23:07:33.346961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.859 [2024-11-17 23:07:33.346995] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.859 [2024-11-17 23:07:33.347011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.859 [2024-11-17 23:07:33.347041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.859 [2024-11-17 23:07:33.347057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.859 [2024-11-17 23:07:33.347087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2c2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.860 [2024-11-17 23:07:33.347102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:36.860 [2024-11-17 23:07:33.347132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:d2d2d2d2 cdw11:8ad2d21a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.860 [2024-11-17 23:07:33.347147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:36.860 #26 NEW cov: 11801 ft: 13787 corp: 13/416b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 CopyPart- 00:07:36.860 [2024-11-17 23:07:33.416957] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d2d2d2d2 cdw11:0a0a1ad2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.860 [2024-11-17 23:07:33.416986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:36.860 [2024-11-17 23:07:33.417019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.860 [2024-11-17 23:07:33.417034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:36.860 [2024-11-17 23:07:33.417063] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.860 [2024-11-17 23:07:33.417077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:36.860 #27 NEW cov: 11801 ft: 13806 corp: 14/446b lim: 40 exec/s: 27 rss: 69Mb L: 30/40 MS: 1 CrossOver- 00:07:37.119 [2024-11-17 23:07:33.477188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.119 [2024-11-17 23:07:33.477218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.119 [2024-11-17 23:07:33.477258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.119 [2024-11-17 23:07:33.477273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 
00:07:37.119 [2024-11-17 23:07:33.477301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.119 [2024-11-17 23:07:33.477317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.119 [2024-11-17 23:07:33.477345] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d2d2d208 cdw11:005b00d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.119 [2024-11-17 23:07:33.477359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.119 #28 NEW cov: 11801 ft: 13830 corp: 15/479b lim: 40 exec/s: 28 rss: 69Mb L: 33/40 MS: 1 CMP- DE: "\010\000"- 00:07:37.119 [2024-11-17 23:07:33.537309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d252d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.119 [2024-11-17 23:07:33.537340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.119 [2024-11-17 23:07:33.537374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.119 [2024-11-17 23:07:33.537390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.119 [2024-11-17 23:07:33.537420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.119 [2024-11-17 23:07:33.537436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.119 #29 NEW cov: 11801 ft: 13902 corp: 16/508b lim: 40 exec/s: 29 rss: 69Mb L: 29/40 MS: 1 ChangeBit- 00:07:37.119 [2024-11-17 23:07:33.597359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a525212 cdw11:525252ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.119 [2024-11-17 23:07:33.597390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.119 #32 NEW cov: 11801 ft: 14326 corp: 17/523b lim: 40 exec/s: 32 rss: 69Mb L: 15/40 MS: 3 InsertRepeatedBytes-ChangeBit-CMP- DE: "\377\377\377\377\377\377\377\377"- 00:07:37.119 [2024-11-17 23:07:33.647560] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a525212 cdw11:5252c452 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.119 [2024-11-17 23:07:33.647592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.119 [2024-11-17 23:07:33.647626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.119 [2024-11-17 23:07:33.647642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.119 #33 NEW cov: 11801 ft: 14526 corp: 18/539b lim: 40 exec/s: 33 rss: 69Mb L: 16/40 MS: 1 InsertByte- 00:07:37.119 [2024-11-17 
23:07:33.717755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d20900d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.119 [2024-11-17 23:07:33.717785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.119 [2024-11-17 23:07:33.717818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.119 [2024-11-17 23:07:33.717837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.119 [2024-11-17 23:07:33.717866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.119 [2024-11-17 23:07:33.717880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.379 #34 NEW cov: 11801 ft: 14546 corp: 19/570b lim: 40 exec/s: 34 rss: 69Mb L: 31/40 MS: 1 CMP- DE: "\011\000"- 00:07:37.379 [2024-11-17 23:07:33.767895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d20900d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.379 [2024-11-17 23:07:33.767926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.379 [2024-11-17 23:07:33.767959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:0900d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.379 [2024-11-17 23:07:33.767974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.379 [2024-11-17 23:07:33.768003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.379 [2024-11-17 23:07:33.768018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.379 #35 NEW cov: 11801 ft: 14591 corp: 20/601b lim: 40 exec/s: 35 rss: 69Mb L: 31/40 MS: 1 CopyPart- 00:07:37.379 [2024-11-17 23:07:33.837998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a3fffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.379 [2024-11-17 23:07:33.838028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.379 #39 NEW cov: 11801 ft: 14599 corp: 21/611b lim: 40 exec/s: 39 rss: 69Mb L: 10/40 MS: 4 InsertByte-ChangeBinInt-ChangeBit-PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:07:37.379 [2024-11-17 23:07:33.888286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d20900d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.379 [2024-11-17 23:07:33.888316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.379 [2024-11-17 23:07:33.888350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 
cdw10:d2d2d208 cdw11:00d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.379 [2024-11-17 23:07:33.888366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.379 [2024-11-17 23:07:33.888397] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.379 [2024-11-17 23:07:33.888413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.379 #40 NEW cov: 11801 ft: 14617 corp: 22/642b lim: 40 exec/s: 40 rss: 69Mb L: 31/40 MS: 1 PersAutoDict- DE: "\010\000"- 00:07:37.379 [2024-11-17 23:07:33.938333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.379 [2024-11-17 23:07:33.938363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.379 [2024-11-17 23:07:33.938395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.379 [2024-11-17 23:07:33.938414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.379 [2024-11-17 23:07:33.938443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:11111111 cdw11:11111111 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.379 [2024-11-17 23:07:33.938458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.379 #46 NEW cov: 11801 ft: 14625 corp: 23/668b lim: 40 exec/s: 46 rss: 69Mb L: 26/40 MS: 1 ShuffleBytes- 00:07:37.638 [2024-11-17 23:07:34.008512] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d20900d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.638 [2024-11-17 23:07:34.008548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.638 [2024-11-17 23:07:34.008598] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.638 [2024-11-17 23:07:34.008614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.638 [2024-11-17 23:07:34.008645] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.638 [2024-11-17 23:07:34.008661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.638 #47 NEW cov: 11801 ft: 14706 corp: 24/699b lim: 40 exec/s: 47 rss: 69Mb L: 31/40 MS: 1 ChangeBit- 00:07:37.638 [2024-11-17 23:07:34.058649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d252d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.638 [2024-11-17 23:07:34.058678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.638 [2024-11-17 23:07:34.058711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.638 [2024-11-17 23:07:34.058726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.638 [2024-11-17 23:07:34.058755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2cfd2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.638 [2024-11-17 23:07:34.058770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.638 #48 NEW cov: 11801 ft: 14721 corp: 25/728b lim: 40 exec/s: 48 rss: 69Mb L: 29/40 MS: 1 ChangeByte- 00:07:37.638 [2024-11-17 23:07:34.128860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d20900d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.638 [2024-11-17 23:07:34.128891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.638 [2024-11-17 23:07:34.128925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d208 cdw11:00d2d20a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.638 [2024-11-17 23:07:34.128941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.638 [2024-11-17 23:07:34.128972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.638 [2024-11-17 23:07:34.128988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.638 #49 NEW cov: 11801 ft: 14757 corp: 26/756b lim: 40 exec/s: 49 rss: 70Mb L: 28/40 MS: 1 CrossOver- 00:07:37.638 [2024-11-17 23:07:34.198939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a525212 cdw11:52d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.638 [2024-11-17 23:07:34.198971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.638 #50 NEW cov: 11801 ft: 14820 corp: 27/771b lim: 40 exec/s: 50 rss: 70Mb L: 15/40 MS: 1 CrossOver- 00:07:37.898 [2024-11-17 23:07:34.259141] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a525212 cdw11:5252c452 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.898 [2024-11-17 23:07:34.259173] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.898 [2024-11-17 23:07:34.259207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.898 [2024-11-17 23:07:34.259224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.898 #51 NEW cov: 11801 ft: 14859 corp: 28/787b lim: 40 exec/s: 51 rss: 70Mb L: 16/40 MS: 1 CopyPart- 00:07:37.898 [2024-11-17 23:07:34.329397] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d252d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.898 [2024-11-17 23:07:34.329428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.898 [2024-11-17 23:07:34.329460] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.898 [2024-11-17 23:07:34.329475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.898 [2024-11-17 23:07:34.329503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.898 [2024-11-17 23:07:34.329518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.898 [2024-11-17 23:07:34.329553] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.898 [2024-11-17 23:07:34.329585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:37.898 #52 NEW cov: 11808 ft: 14899 corp: 29/819b lim: 40 exec/s: 52 rss: 70Mb L: 32/40 MS: 1 CopyPart- 00:07:37.898 [2024-11-17 23:07:34.379418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:0a525212 cdw11:52d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.898 [2024-11-17 23:07:34.379449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.898 #53 NEW cov: 11808 ft: 14905 corp: 30/834b lim: 40 exec/s: 53 rss: 70Mb L: 15/40 MS: 1 ChangeBit- 00:07:37.898 [2024-11-17 23:07:34.449695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.899 [2024-11-17 23:07:34.449727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:37.899 [2024-11-17 23:07:34.449761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.899 [2024-11-17 23:07:34.449777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:37.899 [2024-11-17 23:07:34.449808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:d2d2d2d2 cdw11:d2d2d2d2 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.899 [2024-11-17 23:07:34.449827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:37.899 #54 NEW cov: 11808 ft: 14930 corp: 31/863b lim: 40 exec/s: 27 rss: 70Mb L: 29/40 MS: 1 EraseBytes- 00:07:37.899 #54 DONE cov: 11808 ft: 14930 corp: 31/863b lim: 40 exec/s: 27 rss: 70Mb 00:07:37.899 ###### Recommended dictionary. 
###### 00:07:37.899 "\010\000" # Uses: 1 00:07:37.899 "\377\377\377\377\377\377\377\377" # Uses: 1 00:07:37.899 "\011\000" # Uses: 0 00:07:37.899 ###### End of recommended dictionary. ###### 00:07:37.899 Done 54 runs in 2 second(s) 00:07:38.158 23:07:34 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_10.conf 00:07:38.158 23:07:34 -- ../common.sh@72 -- # (( i++ )) 00:07:38.158 23:07:34 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:38.158 23:07:34 -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:07:38.158 23:07:34 -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:07:38.158 23:07:34 -- nvmf/run.sh@24 -- # local timen=1 00:07:38.158 23:07:34 -- nvmf/run.sh@25 -- # local core=0x1 00:07:38.158 23:07:34 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:38.158 23:07:34 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:07:38.158 23:07:34 -- nvmf/run.sh@29 -- # printf %02d 11 00:07:38.158 23:07:34 -- nvmf/run.sh@29 -- # port=4411 00:07:38.158 23:07:34 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:38.158 23:07:34 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:07:38.158 23:07:34 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:38.159 23:07:34 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 -r /var/tmp/spdk11.sock 00:07:38.159 [2024-11-17 23:07:34.659803] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:38.159 [2024-11-17 23:07:34.659869] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301723 ] 00:07:38.159 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.419 [2024-11-17 23:07:34.841474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.419 [2024-11-17 23:07:34.905251] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:38.419 [2024-11-17 23:07:34.905377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.419 [2024-11-17 23:07:34.963210] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.419 [2024-11-17 23:07:34.979538] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:07:38.419 INFO: Running with entropic power schedule (0xFF, 100). 
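The "Recommended dictionary" block printed at the end of the run above is standard libFuzzer end-of-run output: each quoted token (shown with octal escapes) is a byte sequence the mutator found productive, together with a use count. A minimal sketch of carrying those tokens into a later run, assuming the llvm_nvme_fuzz wrapper forwards standard libFuzzer options such as -dict= (this log does not show it doing so, so both the flag pass-through and the /tmp path below are assumptions for illustration):

cat > /tmp/nvmf_10.dict <<'EOF'
# Tokens from the recommended dictionary above, rewritten from octal
# ("\010\000" etc.) into the \xNN escapes libFuzzer dictionaries expect.
"\x08\x00"
"\xff\xff\xff\xff\xff\xff\xff\xff"
"\x09\x00"
EOF
# Hypothetical reuse: append -dict=/tmp/nvmf_10.dict to the
# llvm_nvme_fuzz invocation shown above, leaving its other arguments intact.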
00:07:38.419 INFO: Seed: 1428412598 00:07:38.419 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:38.419 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:38.419 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:07:38.419 INFO: A corpus is not provided, starting from an empty corpus 00:07:38.419 #2 INITED exec/s: 0 rss: 60Mb 00:07:38.419 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:38.419 This may also happen if the target rejected all inputs we tried so far 00:07:38.678 [2024-11-17 23:07:35.045784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.678 [2024-11-17 23:07:35.045821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.938 NEW_FUNC[1/671]: 0x4493f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:07:38.938 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:38.938 #12 NEW cov: 11592 ft: 11593 corp: 2/16b lim: 40 exec/s: 0 rss: 68Mb L: 15/15 MS: 5 ShuffleBytes-ChangeBit-ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:07:38.938 [2024-11-17 23:07:35.356833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.938 [2024-11-17 23:07:35.356871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.938 [2024-11-17 23:07:35.357010] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.938 [2024-11-17 23:07:35.357028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.938 #21 NEW cov: 11705 ft: 12911 corp: 3/35b lim: 40 exec/s: 0 rss: 68Mb L: 19/19 MS: 4 CopyPart-ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:07:38.938 [2024-11-17 23:07:35.406640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:adadadad cdw11:adadadad SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.938 [2024-11-17 23:07:35.406669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.938 #23 NEW cov: 11711 ft: 13083 corp: 4/47b lim: 40 exec/s: 0 rss: 68Mb L: 12/19 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:07:38.938 [2024-11-17 23:07:35.456729] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30301030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.938 [2024-11-17 23:07:35.456758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.938 #24 NEW cov: 11796 ft: 13314 corp: 5/62b lim: 40 exec/s: 0 rss: 68Mb L: 15/19 MS: 1 ChangeBit- 00:07:38.938 [2024-11-17 23:07:35.506902] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:38.938 [2024-11-17 23:07:35.506928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.938 #26 NEW cov: 11796 ft: 13375 corp: 6/73b lim: 40 exec/s: 0 rss: 68Mb L: 11/19 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:38.938 [2024-11-17 23:07:35.548025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.938 [2024-11-17 23:07:35.548055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:38.938 [2024-11-17 23:07:35.548184] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.938 [2024-11-17 23:07:35.548199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:38.938 [2024-11-17 23:07:35.548335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.938 [2024-11-17 23:07:35.548348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:38.938 [2024-11-17 23:07:35.548490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffff30 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:38.938 [2024-11-17 23:07:35.548504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.198 #27 NEW cov: 11796 ft: 13795 corp: 7/107b lim: 40 exec/s: 0 rss: 68Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:07:39.198 [2024-11-17 23:07:35.597820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.198 [2024-11-17 23:07:35.597851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.198 [2024-11-17 23:07:35.598008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:30303030 cdw11:3030300a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.198 [2024-11-17 23:07:35.598026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.198 [2024-11-17 23:07:35.598173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.198 [2024-11-17 23:07:35.598189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.198 #28 NEW cov: 11796 ft: 14021 corp: 8/137b lim: 40 exec/s: 0 rss: 68Mb L: 30/34 MS: 1 CrossOver- 00:07:39.198 [2024-11-17 23:07:35.648276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.198 [2024-11-17 23:07:35.648306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.198 [2024-11-17 
23:07:35.648450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.198 [2024-11-17 23:07:35.648468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.198 [2024-11-17 23:07:35.648610] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:000000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.198 [2024-11-17 23:07:35.648628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.198 [2024-11-17 23:07:35.648768] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff3030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.198 [2024-11-17 23:07:35.648784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.198 #29 NEW cov: 11796 ft: 14314 corp: 9/174b lim: 40 exec/s: 0 rss: 68Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:07:39.198 [2024-11-17 23:07:35.707577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:adadadad cdw11:adadadad SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.198 [2024-11-17 23:07:35.707606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.198 #30 NEW cov: 11796 ft: 14343 corp: 10/186b lim: 40 exec/s: 0 rss: 68Mb L: 12/37 MS: 1 ShuffleBytes- 00:07:39.198 [2024-11-17 23:07:35.768720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.198 [2024-11-17 23:07:35.768752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.198 [2024-11-17 23:07:35.768893] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.198 [2024-11-17 23:07:35.768909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.198 [2024-11-17 23:07:35.769045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.198 [2024-11-17 23:07:35.769063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.198 [2024-11-17 23:07:35.769199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffff30 cdw11:54303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.198 [2024-11-17 23:07:35.769217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.199 #31 NEW cov: 11796 ft: 14378 corp: 11/220b lim: 40 exec/s: 0 rss: 68Mb L: 34/37 MS: 1 ChangeByte- 00:07:39.458 [2024-11-17 23:07:35.817948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000020 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.458 [2024-11-17 23:07:35.817977] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.458 #32 NEW cov: 11796 ft: 14420 corp: 12/231b lim: 40 exec/s: 0 rss: 68Mb L: 11/37 MS: 1 ChangeBit- 00:07:39.458 [2024-11-17 23:07:35.879134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.458 [2024-11-17 23:07:35.879163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.458 [2024-11-17 23:07:35.879313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.458 [2024-11-17 23:07:35.879330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.458 [2024-11-17 23:07:35.879465] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.458 [2024-11-17 23:07:35.879484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.458 [2024-11-17 23:07:35.879613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffff30 cdw11:5430302c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.458 [2024-11-17 23:07:35.879629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.458 #33 NEW cov: 11796 ft: 14433 corp: 13/266b lim: 40 exec/s: 0 rss: 68Mb L: 35/37 MS: 1 InsertByte- 00:07:39.458 [2024-11-17 23:07:35.938381] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30301030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.458 [2024-11-17 23:07:35.938410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.458 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:39.458 #34 NEW cov: 11819 ft: 14477 corp: 14/281b lim: 40 exec/s: 0 rss: 69Mb L: 15/37 MS: 1 ChangeBit- 00:07:39.458 [2024-11-17 23:07:35.999425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.458 [2024-11-17 23:07:35.999452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.458 [2024-11-17 23:07:35.999604] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.458 [2024-11-17 23:07:35.999622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.458 [2024-11-17 23:07:35.999739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.458 [2024-11-17 23:07:35.999757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.458 
[2024-11-17 23:07:35.999889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:f8ffffce cdw11:abcfcfcf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.458 [2024-11-17 23:07:35.999908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.458 #40 NEW cov: 11819 ft: 14495 corp: 15/315b lim: 40 exec/s: 0 rss: 69Mb L: 34/37 MS: 1 ChangeBinInt- 00:07:39.458 [2024-11-17 23:07:36.048708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.458 [2024-11-17 23:07:36.048737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.458 #41 NEW cov: 11819 ft: 14505 corp: 16/330b lim: 40 exec/s: 41 rss: 69Mb L: 15/37 MS: 1 CopyPart- 00:07:39.717 [2024-11-17 23:07:36.099126] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.717 [2024-11-17 23:07:36.099155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.717 [2024-11-17 23:07:36.099304] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff949494 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.717 [2024-11-17 23:07:36.099322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.717 #42 NEW cov: 11819 ft: 14520 corp: 17/352b lim: 40 exec/s: 42 rss: 69Mb L: 22/37 MS: 1 InsertRepeatedBytes- 00:07:39.717 [2024-11-17 23:07:36.159554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:d6d6d6d6 cdw11:d6d6d6d6 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.717 [2024-11-17 23:07:36.159584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.717 [2024-11-17 23:07:36.159724] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:d6d63030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.717 [2024-11-17 23:07:36.159741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.717 [2024-11-17 23:07:36.159868] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:10303030 cdw11:3030b030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.717 [2024-11-17 23:07:36.159887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.717 #43 NEW cov: 11819 ft: 14523 corp: 18/377b lim: 40 exec/s: 43 rss: 69Mb L: 25/37 MS: 1 InsertRepeatedBytes- 00:07:39.717 [2024-11-17 23:07:36.219217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ad8dadad cdw11:adadadad SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.717 [2024-11-17 23:07:36.219244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.717 #54 NEW cov: 11819 ft: 14537 corp: 19/389b lim: 40 exec/s: 54 rss: 69Mb L: 12/37 MS: 1 ChangeBit- 00:07:39.717 
[2024-11-17 23:07:36.269297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:adadadad cdw11:adad5ead SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.717 [2024-11-17 23:07:36.269324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.717 #55 NEW cov: 11819 ft: 14545 corp: 20/401b lim: 40 exec/s: 55 rss: 69Mb L: 12/37 MS: 1 ChangeByte- 00:07:39.717 [2024-11-17 23:07:36.320358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.717 [2024-11-17 23:07:36.320384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.717 [2024-11-17 23:07:36.320537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.717 [2024-11-17 23:07:36.320553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.717 [2024-11-17 23:07:36.320719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ff0000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.717 [2024-11-17 23:07:36.320735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.718 [2024-11-17 23:07:36.320874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff3030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.718 [2024-11-17 23:07:36.320892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.977 #56 NEW cov: 11819 ft: 14569 corp: 21/438b lim: 40 exec/s: 56 rss: 69Mb L: 37/37 MS: 1 ShuffleBytes- 00:07:39.977 [2024-11-17 23:07:36.380040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4affffff cdw11:ff2fffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.977 [2024-11-17 23:07:36.380069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.977 [2024-11-17 23:07:36.380217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.977 [2024-11-17 23:07:36.380235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.977 #57 NEW cov: 11819 ft: 14590 corp: 22/458b lim: 40 exec/s: 57 rss: 69Mb L: 20/37 MS: 1 InsertByte- 00:07:39.977 [2024-11-17 23:07:36.430197] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:4affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.977 [2024-11-17 23:07:36.430224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.977 [2024-11-17 23:07:36.430358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ff949494 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.977 [2024-11-17 23:07:36.430373] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.977 #58 NEW cov: 11819 ft: 14607 corp: 23/481b lim: 40 exec/s: 58 rss: 69Mb L: 23/37 MS: 1 InsertByte- 00:07:39.977 [2024-11-17 23:07:36.480809] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.977 [2024-11-17 23:07:36.480835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.977 [2024-11-17 23:07:36.480974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.977 [2024-11-17 23:07:36.480991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:39.977 [2024-11-17 23:07:36.481125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.977 [2024-11-17 23:07:36.481141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:39.977 [2024-11-17 23:07:36.481281] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffff30 cdw11:302c3030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.977 [2024-11-17 23:07:36.481296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:39.977 #59 NEW cov: 11819 ft: 14614 corp: 24/514b lim: 40 exec/s: 59 rss: 69Mb L: 33/37 MS: 1 EraseBytes- 00:07:39.977 [2024-11-17 23:07:36.540204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:adadadad cdw11:adadadad SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:39.977 [2024-11-17 23:07:36.540234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:39.977 #60 NEW cov: 11819 ft: 14626 corp: 25/526b lim: 40 exec/s: 60 rss: 69Mb L: 12/37 MS: 1 CopyPart- 00:07:40.236 [2024-11-17 23:07:36.590395] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30d0d530 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.236 [2024-11-17 23:07:36.590423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.236 #61 NEW cov: 11819 ft: 14630 corp: 26/541b lim: 40 exec/s: 61 rss: 69Mb L: 15/37 MS: 1 ChangeBinInt- 00:07:40.236 [2024-11-17 23:07:36.640539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:00002b00 cdw11:0000041f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.236 [2024-11-17 23:07:36.640568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.237 #66 NEW cov: 11819 ft: 14643 corp: 27/549b lim: 40 exec/s: 66 rss: 69Mb L: 8/37 MS: 5 EraseBytes-ChangeByte-ChangeBit-ShuffleBytes-CopyPart- 00:07:40.237 [2024-11-17 23:07:36.690703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:d0cfcfcf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.237 [2024-11-17 
23:07:36.690731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.237 #67 NEW cov: 11819 ft: 14657 corp: 28/564b lim: 40 exec/s: 67 rss: 69Mb L: 15/37 MS: 1 ChangeBinInt- 00:07:40.237 [2024-11-17 23:07:36.741418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.237 [2024-11-17 23:07:36.741446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.237 [2024-11-17 23:07:36.741580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.237 [2024-11-17 23:07:36.741610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.237 [2024-11-17 23:07:36.741744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.237 [2024-11-17 23:07:36.741761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.237 #68 NEW cov: 11819 ft: 14678 corp: 29/595b lim: 40 exec/s: 68 rss: 70Mb L: 31/37 MS: 1 EraseBytes- 00:07:40.237 [2024-11-17 23:07:36.791039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:adadadad cdw11:adad5ead SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.237 [2024-11-17 23:07:36.791065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.237 #69 NEW cov: 11819 ft: 14686 corp: 30/607b lim: 40 exec/s: 69 rss: 70Mb L: 12/37 MS: 1 ChangeBit- 00:07:40.237 [2024-11-17 23:07:36.842086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.237 [2024-11-17 23:07:36.842114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.237 [2024-11-17 23:07:36.842253] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffff8cff cdw11:ffffff00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.237 [2024-11-17 23:07:36.842270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.237 [2024-11-17 23:07:36.842408] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ff0000ff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.237 [2024-11-17 23:07:36.842427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.237 [2024-11-17 23:07:36.842564] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffff3030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.237 [2024-11-17 23:07:36.842581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.496 #70 NEW cov: 11819 ft: 14720 corp: 31/644b lim: 40 exec/s: 70 rss: 70Mb L: 37/37 MS: 1 
ChangeByte- 00:07:40.496 [2024-11-17 23:07:36.901445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.496 [2024-11-17 23:07:36.901474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.496 #71 NEW cov: 11819 ft: 14743 corp: 32/658b lim: 40 exec/s: 71 rss: 70Mb L: 14/37 MS: 1 EraseBytes- 00:07:40.496 [2024-11-17 23:07:36.952481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.496 [2024-11-17 23:07:36.952508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.496 [2024-11-17 23:07:36.952640] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.496 [2024-11-17 23:07:36.952656] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.496 [2024-11-17 23:07:36.952795] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffefff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.496 [2024-11-17 23:07:36.952814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.497 [2024-11-17 23:07:36.952952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:f8ffffce cdw11:abcfcfcf SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.497 [2024-11-17 23:07:36.952969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:40.497 #72 NEW cov: 11819 ft: 14745 corp: 33/692b lim: 40 exec/s: 72 rss: 70Mb L: 34/37 MS: 1 ChangeBit- 00:07:40.497 [2024-11-17 23:07:37.012728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:30303030 cdw11:30303030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.497 [2024-11-17 23:07:37.012756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:40.497 [2024-11-17 23:07:37.012897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:30ffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.497 [2024-11-17 23:07:37.012917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:40.497 [2024-11-17 23:07:37.013052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.497 [2024-11-17 23:07:37.013070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:40.497 [2024-11-17 23:07:37.013202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffff30 cdw11:302c3030 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:40.497 [2024-11-17 23:07:37.013218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:07:40.497 #73 NEW cov: 11819 ft: 14757 corp: 34/725b lim: 40 exec/s: 36 rss: 70Mb L: 33/37 MS: 1 CopyPart- 00:07:40.497 #73 DONE cov: 11819 ft: 14757 corp: 34/725b lim: 40 exec/s: 36 rss: 70Mb 00:07:40.497 Done 73 runs in 2 second(s) 00:07:40.756 23:07:37 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_11.conf 00:07:40.756 23:07:37 -- ../common.sh@72 -- # (( i++ )) 00:07:40.756 23:07:37 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:40.756 23:07:37 -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:07:40.756 23:07:37 -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:07:40.756 23:07:37 -- nvmf/run.sh@24 -- # local timen=1 00:07:40.756 23:07:37 -- nvmf/run.sh@25 -- # local core=0x1 00:07:40.756 23:07:37 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:40.756 23:07:37 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:07:40.756 23:07:37 -- nvmf/run.sh@29 -- # printf %02d 12 00:07:40.756 23:07:37 -- nvmf/run.sh@29 -- # port=4412 00:07:40.756 23:07:37 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:40.756 23:07:37 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:07:40.756 23:07:37 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:40.756 23:07:37 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 -r /var/tmp/spdk12.sock 00:07:40.756 [2024-11-17 23:07:37.197359] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:40.756 [2024-11-17 23:07:37.197424] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302042 ] 00:07:40.756 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.016 [2024-11-17 23:07:37.383204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.017 [2024-11-17 23:07:37.447127] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:41.017 [2024-11-17 23:07:37.447258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.017 [2024-11-17 23:07:37.505169] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.017 [2024-11-17 23:07:37.521481] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:07:41.017 INFO: Running with entropic power schedule (0xFF, 100). 
00:07:41.017 INFO: Seed: 3971391601 00:07:41.017 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:41.017 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:41.017 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:07:41.017 INFO: A corpus is not provided, starting from an empty corpus 00:07:41.017 #2 INITED exec/s: 0 rss: 60Mb 00:07:41.017 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:41.017 This may also happen if the target rejected all inputs we tried so far 00:07:41.017 [2024-11-17 23:07:37.597677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.017 [2024-11-17 23:07:37.597712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.584 NEW_FUNC[1/671]: 0x44b168 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:07:41.584 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:41.584 #39 NEW cov: 11590 ft: 11591 corp: 2/9b lim: 40 exec/s: 0 rss: 68Mb L: 8/8 MS: 2 ChangeByte-InsertRepeatedBytes- 00:07:41.584 [2024-11-17 23:07:37.928677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24000000 cdw11:00000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.584 [2024-11-17 23:07:37.928729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.584 #40 NEW cov: 11703 ft: 12192 corp: 3/18b lim: 40 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 InsertByte- 00:07:41.585 [2024-11-17 23:07:37.978531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24000000 cdw11:0000003c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.585 [2024-11-17 23:07:37.978561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.585 #41 NEW cov: 11709 ft: 12488 corp: 4/27b lim: 40 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 ChangeBinInt- 00:07:41.585 [2024-11-17 23:07:38.018713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.585 [2024-11-17 23:07:38.018738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.585 #42 NEW cov: 11794 ft: 12804 corp: 5/35b lim: 40 exec/s: 0 rss: 68Mb L: 8/9 MS: 1 ShuffleBytes- 00:07:41.585 [2024-11-17 23:07:38.058837] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2c000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.585 [2024-11-17 23:07:38.058864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.585 #48 NEW cov: 11794 ft: 12955 corp: 6/43b lim: 40 exec/s: 0 rss: 68Mb L: 8/9 MS: 1 ChangeBit- 00:07:41.585 [2024-11-17 23:07:38.098698] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7a240000 cdw11:00000000 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.585 [2024-11-17 23:07:38.098726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.585 #49 NEW cov: 11794 ft: 13025 corp: 7/52b lim: 40 exec/s: 0 rss: 68Mb L: 9/9 MS: 1 InsertByte- 00:07:41.585 [2024-11-17 23:07:38.139620] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.585 [2024-11-17 23:07:38.139646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.585 [2024-11-17 23:07:38.139784] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.585 [2024-11-17 23:07:38.139801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:41.585 [2024-11-17 23:07:38.139934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.585 [2024-11-17 23:07:38.139949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:41.585 #51 NEW cov: 11794 ft: 13838 corp: 8/83b lim: 40 exec/s: 0 rss: 68Mb L: 31/31 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:41.585 [2024-11-17 23:07:38.179189] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24000000 cdw11:007a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.585 [2024-11-17 23:07:38.179216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.844 #52 NEW cov: 11794 ft: 13892 corp: 9/92b lim: 40 exec/s: 0 rss: 68Mb L: 9/31 MS: 1 InsertByte- 00:07:41.844 [2024-11-17 23:07:38.218954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7a240000 cdw11:02000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.844 [2024-11-17 23:07:38.218980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.844 #53 NEW cov: 11794 ft: 13920 corp: 10/101b lim: 40 exec/s: 0 rss: 68Mb L: 9/31 MS: 1 ChangeBit- 00:07:41.844 [2024-11-17 23:07:38.269538] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7a010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.844 [2024-11-17 23:07:38.269566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.844 #54 NEW cov: 11794 ft: 14044 corp: 11/110b lim: 40 exec/s: 0 rss: 68Mb L: 9/31 MS: 1 CMP- DE: "\001\000\000\000\000\000\000\002"- 00:07:41.844 [2024-11-17 23:07:38.319601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2c000000 cdw11:2c000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.844 [2024-11-17 23:07:38.319629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.844 #55 NEW cov: 11794 ft: 14065 corp: 12/121b lim: 40 exec/s: 0 rss: 68Mb L: 11/31 MS: 1 CopyPart- 00:07:41.844 [2024-11-17 23:07:38.369786] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24000000 cdw11:00000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.844 [2024-11-17 23:07:38.369814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.844 #66 NEW cov: 11794 ft: 14096 corp: 13/130b lim: 40 exec/s: 0 rss: 69Mb L: 9/31 MS: 1 CrossOver- 00:07:41.844 [2024-11-17 23:07:38.419992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7a010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:41.844 [2024-11-17 23:07:38.420019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:41.844 #67 NEW cov: 11794 ft: 14147 corp: 14/139b lim: 40 exec/s: 0 rss: 69Mb L: 9/31 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\002"- 00:07:42.103 [2024-11-17 23:07:38.459944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7a0100ff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.103 [2024-11-17 23:07:38.459973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.103 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:42.103 #68 NEW cov: 11817 ft: 14194 corp: 15/148b lim: 40 exec/s: 0 rss: 69Mb L: 9/31 MS: 1 ChangeByte- 00:07:42.103 [2024-11-17 23:07:38.520301] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2c000000 cdw11:08000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.103 [2024-11-17 23:07:38.520328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.103 #69 NEW cov: 11817 ft: 14211 corp: 16/156b lim: 40 exec/s: 0 rss: 69Mb L: 8/31 MS: 1 ChangeBinInt- 00:07:42.104 [2024-11-17 23:07:38.560805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:040000d3 cdw11:d3d3d3d3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.104 [2024-11-17 23:07:38.560832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.104 [2024-11-17 23:07:38.560970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d3d3d3d3 cdw11:d3d3d3d3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.104 [2024-11-17 23:07:38.560988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.104 [2024-11-17 23:07:38.561114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d3d3d3d3 cdw11:d3d3d3d3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.104 [2024-11-17 23:07:38.561132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.104 [2024-11-17 23:07:38.561250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d3d3d3d3 cdw11:d3d3d3d3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.104 [2024-11-17 23:07:38.561270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:07:42.104 #72 NEW cov: 11817 ft: 14537 corp: 17/195b lim: 40 exec/s: 72 rss: 69Mb L: 39/39 MS: 3 EraseBytes-ChangeBinInt-InsertRepeatedBytes- 00:07:42.104 [2024-11-17 23:07:38.611391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:040000d3 cdw11:d3d3d3d3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.104 [2024-11-17 23:07:38.611418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.104 [2024-11-17 23:07:38.611537] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d3d3d3d3 cdw11:d3d3d3d3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.104 [2024-11-17 23:07:38.611553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.104 [2024-11-17 23:07:38.611674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d3d3d3d3 cdw11:d3d3d3d3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.104 [2024-11-17 23:07:38.611689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.104 [2024-11-17 23:07:38.611810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d3d3d3d3 cdw11:d3d3d3d3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.104 [2024-11-17 23:07:38.611828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.104 #73 NEW cov: 11817 ft: 14571 corp: 18/234b lim: 40 exec/s: 73 rss: 69Mb L: 39/39 MS: 1 CopyPart- 00:07:42.104 [2024-11-17 23:07:38.660777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2c000000 cdw11:2c000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.104 [2024-11-17 23:07:38.660802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.104 [2024-11-17 23:07:38.660947] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.104 [2024-11-17 23:07:38.660962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.104 [2024-11-17 23:07:38.661085] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.104 [2024-11-17 23:07:38.661102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.104 #74 NEW cov: 11817 ft: 14644 corp: 19/260b lim: 40 exec/s: 74 rss: 69Mb L: 26/39 MS: 1 InsertRepeatedBytes- 00:07:42.104 [2024-11-17 23:07:38.700377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24000000 cdw11:00000031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.104 [2024-11-17 23:07:38.700404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.363 #80 NEW cov: 11817 ft: 14658 corp: 20/269b lim: 40 exec/s: 80 rss: 69Mb L: 9/39 MS: 1 ChangeASCIIInt- 00:07:42.363 [2024-11-17 23:07:38.750986] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2c000000 cdw11:0000002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.363 [2024-11-17 23:07:38.751014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.363 #81 NEW cov: 11817 ft: 14664 corp: 21/278b lim: 40 exec/s: 81 rss: 69Mb L: 9/39 MS: 1 InsertByte- 00:07:42.363 [2024-11-17 23:07:38.801205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24240000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.363 [2024-11-17 23:07:38.801233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.363 #83 NEW cov: 11817 ft: 14688 corp: 22/292b lim: 40 exec/s: 83 rss: 69Mb L: 14/39 MS: 2 EraseBytes-CrossOver- 00:07:42.363 [2024-11-17 23:07:38.861347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24000000 cdw11:00003c00 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.363 [2024-11-17 23:07:38.861375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.363 #84 NEW cov: 11817 ft: 14699 corp: 23/300b lim: 40 exec/s: 84 rss: 69Mb L: 8/39 MS: 1 CrossOver- 00:07:42.363 [2024-11-17 23:07:38.911515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00240000 cdw11:00000032 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.363 [2024-11-17 23:07:38.911549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.363 #85 NEW cov: 11817 ft: 14703 corp: 24/309b lim: 40 exec/s: 85 rss: 69Mb L: 9/39 MS: 1 ShuffleBytes- 00:07:42.363 [2024-11-17 23:07:38.961606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.364 [2024-11-17 23:07:38.961633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.622 #86 NEW cov: 11817 ft: 14744 corp: 25/317b lim: 40 exec/s: 86 rss: 69Mb L: 8/39 MS: 1 ChangeBinInt- 00:07:42.622 [2024-11-17 23:07:39.011794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7a0100ff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.623 [2024-11-17 23:07:39.011822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.623 #87 NEW cov: 11817 ft: 14754 corp: 26/327b lim: 40 exec/s: 87 rss: 69Mb L: 10/39 MS: 1 InsertByte- 00:07:42.623 [2024-11-17 23:07:39.072596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.623 [2024-11-17 23:07:39.072624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.623 [2024-11-17 23:07:39.072745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.623 [2024-11-17 23:07:39.072764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE 
(00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.623 [2024-11-17 23:07:39.072891] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000041 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.623 [2024-11-17 23:07:39.072907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.623 #88 NEW cov: 11817 ft: 14799 corp: 27/358b lim: 40 exec/s: 88 rss: 70Mb L: 31/39 MS: 1 ChangeByte- 00:07:42.623 [2024-11-17 23:07:39.122022] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7a0100ff cdw11:01000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.623 [2024-11-17 23:07:39.122049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.623 #89 NEW cov: 11817 ft: 14812 corp: 28/367b lim: 40 exec/s: 89 rss: 70Mb L: 9/39 MS: 1 CopyPart- 00:07:42.623 [2024-11-17 23:07:39.162478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24000000 cdw11:00000031 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.623 [2024-11-17 23:07:39.162507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.623 [2024-11-17 23:07:39.162626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.623 [2024-11-17 23:07:39.162646] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.623 #90 NEW cov: 11817 ft: 15014 corp: 29/384b lim: 40 exec/s: 90 rss: 70Mb L: 17/39 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\002"- 00:07:42.623 [2024-11-17 23:07:39.222416] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00002400 cdw11:0000007a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.623 [2024-11-17 23:07:39.222443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.882 #91 NEW cov: 11817 ft: 15024 corp: 30/395b lim: 40 exec/s: 91 rss: 70Mb L: 11/39 MS: 1 CopyPart- 00:07:42.882 [2024-11-17 23:07:39.262445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24090000 cdw11:0000003c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.882 [2024-11-17 23:07:39.262471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.882 #92 NEW cov: 11817 ft: 15057 corp: 31/404b lim: 40 exec/s: 92 rss: 70Mb L: 9/39 MS: 1 ChangeBinInt- 00:07:42.882 [2024-11-17 23:07:39.302601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:7a0100ff cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.882 [2024-11-17 23:07:39.302626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.882 #93 NEW cov: 11817 ft: 15095 corp: 32/414b lim: 40 exec/s: 93 rss: 70Mb L: 10/39 MS: 1 InsertByte- 00:07:42.882 [2024-11-17 23:07:39.343129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:040000d3 cdw11:d3d3d3d3 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:07:42.882 [2024-11-17 23:07:39.343157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.882 [2024-11-17 23:07:39.343294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:d3d3d3d3 cdw11:d3d3d3d3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.882 [2024-11-17 23:07:39.343310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.882 [2024-11-17 23:07:39.343435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:d3d3d3d3 cdw11:d3d3d3d3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.882 [2024-11-17 23:07:39.343451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.882 [2024-11-17 23:07:39.343579] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:d3d3d3d3 cdw11:d3d3d3d3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.882 [2024-11-17 23:07:39.343597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:42.882 #94 NEW cov: 11817 ft: 15168 corp: 33/453b lim: 40 exec/s: 94 rss: 70Mb L: 39/39 MS: 1 ChangeByte- 00:07:42.882 [2024-11-17 23:07:39.383113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24000000 cdw11:0000003c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.882 [2024-11-17 23:07:39.383139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.882 [2024-11-17 23:07:39.383273] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00010000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.882 [2024-11-17 23:07:39.383290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.882 #95 NEW cov: 11817 ft: 15180 corp: 34/470b lim: 40 exec/s: 95 rss: 70Mb L: 17/39 MS: 1 PersAutoDict- DE: "\001\000\000\000\000\000\000\002"- 00:07:42.882 [2024-11-17 23:07:39.422509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:2c000100 cdw11:0002002a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.883 [2024-11-17 23:07:39.422543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.883 #96 NEW cov: 11817 ft: 15255 corp: 35/479b lim: 40 exec/s: 96 rss: 70Mb L: 9/39 MS: 1 ChangeBinInt- 00:07:42.883 [2024-11-17 23:07:39.473930] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:24000014 cdw11:14141414 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.883 [2024-11-17 23:07:39.473958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:42.883 [2024-11-17 23:07:39.474077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:14141414 cdw11:14141414 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.883 [2024-11-17 23:07:39.474092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:42.883 
[2024-11-17 23:07:39.474217] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:14141414 cdw11:14141414 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.883 [2024-11-17 23:07:39.474233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:42.883 [2024-11-17 23:07:39.474356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:14141414 cdw11:14140000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:42.883 [2024-11-17 23:07:39.474374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.143 #97 NEW cov: 11817 ft: 15309 corp: 36/515b lim: 40 exec/s: 97 rss: 70Mb L: 36/39 MS: 1 InsertRepeatedBytes- 00:07:43.143 [2024-11-17 23:07:39.512857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0c000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.143 [2024-11-17 23:07:39.512884] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.143 #98 NEW cov: 11817 ft: 15313 corp: 37/524b lim: 40 exec/s: 98 rss: 70Mb L: 9/39 MS: 1 CMP- DE: "\014\000\000\000"- 00:07:43.143 [2024-11-17 23:07:39.564025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.143 [2024-11-17 23:07:39.564052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.143 [2024-11-17 23:07:39.564188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.143 [2024-11-17 23:07:39.564205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.143 [2024-11-17 23:07:39.564327] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000041 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:43.143 [2024-11-17 23:07:39.564344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.143 #99 NEW cov: 11817 ft: 15335 corp: 38/555b lim: 40 exec/s: 49 rss: 70Mb L: 31/39 MS: 1 ShuffleBytes- 00:07:43.143 #99 DONE cov: 11817 ft: 15335 corp: 38/555b lim: 40 exec/s: 49 rss: 70Mb 00:07:43.143 ###### Recommended dictionary. ###### 00:07:43.143 "\001\000\000\000\000\000\000\002" # Uses: 3 00:07:43.143 "\014\000\000\000" # Uses: 0 00:07:43.143 ###### End of recommended dictionary. 
###### 00:07:43.143 Done 99 runs in 2 second(s) 00:07:43.143 23:07:39 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_12.conf 00:07:43.143 23:07:39 -- ../common.sh@72 -- # (( i++ )) 00:07:43.143 23:07:39 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:43.143 23:07:39 -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:07:43.143 23:07:39 -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:07:43.143 23:07:39 -- nvmf/run.sh@24 -- # local timen=1 00:07:43.143 23:07:39 -- nvmf/run.sh@25 -- # local core=0x1 00:07:43.143 23:07:39 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:43.143 23:07:39 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:07:43.143 23:07:39 -- nvmf/run.sh@29 -- # printf %02d 13 00:07:43.143 23:07:39 -- nvmf/run.sh@29 -- # port=4413 00:07:43.143 23:07:39 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:43.143 23:07:39 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:07:43.143 23:07:39 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:43.143 23:07:39 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 -r /var/tmp/spdk13.sock 00:07:43.143 [2024-11-17 23:07:39.743978] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:43.143 [2024-11-17 23:07:39.744044] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302580 ] 00:07:43.404 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.404 [2024-11-17 23:07:39.916465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.404 [2024-11-17 23:07:39.981130] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:43.404 [2024-11-17 23:07:39.981256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.664 [2024-11-17 23:07:40.040258] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.664 [2024-11-17 23:07:40.056592] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:07:43.664 INFO: Running with entropic power schedule (0xFF, 100). 00:07:43.664 INFO: Seed: 2210449348 00:07:43.664 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:43.664 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:43.664 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:07:43.664 INFO: A corpus is not provided, starting from an empty corpus 00:07:43.664 #2 INITED exec/s: 0 rss: 60Mb 00:07:43.664 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:43.664 This may also happen if the target rejected all inputs we tried so far 00:07:43.664 [2024-11-17 23:07:40.102145] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.664 [2024-11-17 23:07:40.102176] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.664 [2024-11-17 23:07:40.102239] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.664 [2024-11-17 23:07:40.102254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.664 [2024-11-17 23:07:40.102312] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.664 [2024-11-17 23:07:40.102327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.924 NEW_FUNC[1/670]: 0x44cd38 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:07:43.924 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:43.924 #3 NEW cov: 11578 ft: 11579 corp: 2/29b lim: 40 exec/s: 0 rss: 68Mb L: 28/28 MS: 1 InsertRepeatedBytes- 00:07:43.924 [2024-11-17 23:07:40.422892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.924 [2024-11-17 23:07:40.422927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.924 [2024-11-17 23:07:40.422987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.924 [2024-11-17 23:07:40.423006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.924 [2024-11-17 23:07:40.423071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.924 [2024-11-17 23:07:40.423086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.924 #4 NEW cov: 11691 ft: 11874 corp: 3/57b lim: 40 exec/s: 0 rss: 68Mb L: 28/28 MS: 1 CopyPart- 00:07:43.924 [2024-11-17 23:07:40.473088] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.924 [2024-11-17 23:07:40.473116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.924 [2024-11-17 23:07:40.473174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060600 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.924 [2024-11-17 23:07:40.473188] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.924 [2024-11-17 23:07:40.473247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.924 [2024-11-17 23:07:40.473262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.924 [2024-11-17 23:07:40.473321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.924 [2024-11-17 23:07:40.473336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:43.924 #5 NEW cov: 11697 ft: 12640 corp: 4/92b lim: 40 exec/s: 0 rss: 68Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:43.924 [2024-11-17 23:07:40.513209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.924 [2024-11-17 23:07:40.513238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:43.924 [2024-11-17 23:07:40.513296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.924 [2024-11-17 23:07:40.513311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:43.924 [2024-11-17 23:07:40.513367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.924 [2024-11-17 23:07:40.513382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:43.924 [2024-11-17 23:07:40.513437] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:000a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:43.924 [2024-11-17 23:07:40.513451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.184 #6 NEW cov: 11782 ft: 12932 corp: 5/127b lim: 40 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:07:44.184 [2024-11-17 23:07:40.563470] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.184 [2024-11-17 23:07:40.563496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.184 [2024-11-17 23:07:40.563555] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.184 [2024-11-17 23:07:40.563570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.184 [2024-11-17 23:07:40.563624] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:07:44.184 [2024-11-17 23:07:40.563638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.184 [2024-11-17 23:07:40.563692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.184 [2024-11-17 23:07:40.563706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.184 [2024-11-17 23:07:40.563761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.184 [2024-11-17 23:07:40.563774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.184 #7 NEW cov: 11782 ft: 13178 corp: 6/167b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 CopyPart- 00:07:44.184 [2024-11-17 23:07:40.603186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.184 [2024-11-17 23:07:40.603212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.184 [2024-11-17 23:07:40.603269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.184 [2024-11-17 23:07:40.603283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.184 #8 NEW cov: 11782 ft: 13474 corp: 7/184b lim: 40 exec/s: 0 rss: 69Mb L: 17/40 MS: 1 EraseBytes- 00:07:44.184 [2024-11-17 23:07:40.643551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.184 [2024-11-17 23:07:40.643577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.184 [2024-11-17 23:07:40.643633] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060600 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.184 [2024-11-17 23:07:40.643647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.184 [2024-11-17 23:07:40.643703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.184 [2024-11-17 23:07:40.643718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.184 [2024-11-17 23:07:40.643775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.184 [2024-11-17 23:07:40.643789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.184 #9 NEW cov: 11782 ft: 13648 corp: 8/219b lim: 40 exec/s: 0 rss: 69Mb L: 35/40 MS: 1 ShuffleBytes- 00:07:44.184 [2024-11-17 
23:07:40.683690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.184 [2024-11-17 23:07:40.683715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.184 [2024-11-17 23:07:40.683772] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060600 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.184 [2024-11-17 23:07:40.683786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.184 [2024-11-17 23:07:40.683842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.184 [2024-11-17 23:07:40.683855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.184 [2024-11-17 23:07:40.683912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00002300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.184 [2024-11-17 23:07:40.683925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.184 #10 NEW cov: 11782 ft: 13699 corp: 9/254b lim: 40 exec/s: 0 rss: 69Mb L: 35/40 MS: 1 ChangeByte- 00:07:44.184 [2024-11-17 23:07:40.723387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.184 [2024-11-17 23:07:40.723414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.184 #11 NEW cov: 11782 ft: 14104 corp: 10/263b lim: 40 exec/s: 0 rss: 69Mb L: 9/40 MS: 1 CrossOver- 00:07:44.184 [2024-11-17 23:07:40.763475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.184 [2024-11-17 23:07:40.763501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.184 #12 NEW cov: 11782 ft: 14151 corp: 11/274b lim: 40 exec/s: 0 rss: 69Mb L: 11/40 MS: 1 CopyPart- 00:07:44.444 [2024-11-17 23:07:40.804124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.444 [2024-11-17 23:07:40.804149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.444 [2024-11-17 23:07:40.804209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.444 [2024-11-17 23:07:40.804222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.444 [2024-11-17 23:07:40.804279] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.444 
[2024-11-17 23:07:40.804293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.444 [2024-11-17 23:07:40.804348] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.444 [2024-11-17 23:07:40.804361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.444 [2024-11-17 23:07:40.804418] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:000000d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.444 [2024-11-17 23:07:40.804435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.444 #13 NEW cov: 11782 ft: 14191 corp: 12/314b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 ChangeByte- 00:07:44.444 [2024-11-17 23:07:40.843881] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.444 [2024-11-17 23:07:40.843907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.444 [2024-11-17 23:07:40.843967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:0000000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.444 [2024-11-17 23:07:40.843981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.444 #14 NEW cov: 11782 ft: 14217 corp: 13/330b lim: 40 exec/s: 0 rss: 69Mb L: 16/40 MS: 1 EraseBytes- 00:07:44.444 [2024-11-17 23:07:40.884294] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.444 [2024-11-17 23:07:40.884320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.444 [2024-11-17 23:07:40.884378] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.444 [2024-11-17 23:07:40.884392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.444 [2024-11-17 23:07:40.884449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.445 [2024-11-17 23:07:40.884463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.445 [2024-11-17 23:07:40.884518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.445 [2024-11-17 23:07:40.884538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.445 #15 NEW cov: 11782 ft: 14297 corp: 14/365b lim: 40 exec/s: 0 rss: 69Mb L: 35/40 MS: 1 CopyPart- 00:07:44.445 [2024-11-17 23:07:40.924493] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.445 [2024-11-17 23:07:40.924518] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.445 [2024-11-17 23:07:40.924580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.445 [2024-11-17 23:07:40.924594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.445 [2024-11-17 23:07:40.924651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.445 [2024-11-17 23:07:40.924665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.445 [2024-11-17 23:07:40.924721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.445 [2024-11-17 23:07:40.924735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.445 [2024-11-17 23:07:40.924790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:000000d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.445 [2024-11-17 23:07:40.924806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.445 #16 NEW cov: 11782 ft: 14321 corp: 15/405b lim: 40 exec/s: 0 rss: 69Mb L: 40/40 MS: 1 CopyPart- 00:07:44.445 [2024-11-17 23:07:40.964135] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:01050000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.445 [2024-11-17 23:07:40.964162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.445 #17 NEW cov: 11782 ft: 14350 corp: 16/414b lim: 40 exec/s: 0 rss: 69Mb L: 9/40 MS: 1 ChangeBinInt- 00:07:44.445 [2024-11-17 23:07:41.004202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:f8000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.445 [2024-11-17 23:07:41.004228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.445 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:44.445 #18 NEW cov: 11805 ft: 14428 corp: 17/425b lim: 40 exec/s: 0 rss: 69Mb L: 11/40 MS: 1 ChangeBinInt- 00:07:44.445 [2024-11-17 23:07:41.044339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.445 [2024-11-17 23:07:41.044364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.705 #19 NEW cov: 11805 ft: 14440 corp: 18/435b lim: 40 exec/s: 0 rss: 69Mb L: 10/40 MS: 1 EraseBytes- 00:07:44.705 
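The NEW_FUNC marker a few lines up is libFuzzer reporting that an input just reached a function for the first time, here get_rusage in SPDK's event reactor (lib/event/reactor.c:609 per the log). Grepping a saved copy of this console output for those markers gives a quick inventory of how deep into the target the corpus has reached; a minimal sketch, where the log file name console.log is an assumption:

# Sketch: list the functions the fuzzer newly covered, in the order it hit them.
grep -oE 'NEW_FUNC\[[0-9]+/[0-9]+\]: 0x[0-9a-f]+ in [A-Za-z_][A-Za-z0-9_]*' console.log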
[2024-11-17 23:07:41.084981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.085006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.705 [2024-11-17 23:07:41.085066] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.085080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.705 [2024-11-17 23:07:41.085136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.085150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.705 [2024-11-17 23:07:41.085208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.085221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.705 [2024-11-17 23:07:41.085278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:000000d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.085291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.705 #20 NEW cov: 11805 ft: 14459 corp: 19/475b lim: 40 exec/s: 20 rss: 70Mb L: 40/40 MS: 1 ChangeBit- 00:07:44.705 [2024-11-17 23:07:41.125045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.125071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.705 [2024-11-17 23:07:41.125132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060600 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.125146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.705 [2024-11-17 23:07:41.125205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.125218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.705 [2024-11-17 23:07:41.125275] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:24000000 cdw11:00000023 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.125288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.705 #21 NEW cov: 11805 ft: 14468 corp: 20/511b lim: 40 
exec/s: 21 rss: 70Mb L: 36/40 MS: 1 InsertByte- 00:07:44.705 [2024-11-17 23:07:41.165129] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.165156] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.705 [2024-11-17 23:07:41.165213] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.165227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.705 [2024-11-17 23:07:41.165285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.165299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.705 [2024-11-17 23:07:41.165355] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.165368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.705 #22 NEW cov: 11805 ft: 14483 corp: 21/547b lim: 40 exec/s: 22 rss: 70Mb L: 36/40 MS: 1 CrossOver- 00:07:44.705 [2024-11-17 23:07:41.204827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.204853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.705 #23 NEW cov: 11805 ft: 14485 corp: 22/561b lim: 40 exec/s: 23 rss: 70Mb L: 14/40 MS: 1 EraseBytes- 00:07:44.705 [2024-11-17 23:07:41.244964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000f800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.244989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.705 #24 NEW cov: 11805 ft: 14516 corp: 23/572b lim: 40 exec/s: 24 rss: 70Mb L: 11/40 MS: 1 ShuffleBytes- 00:07:44.705 [2024-11-17 23:07:41.285442] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.285467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.705 [2024-11-17 23:07:41.285524] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060600 cdw11:93000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.285544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.705 [2024-11-17 23:07:41.285601] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 
cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.285614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.705 [2024-11-17 23:07:41.285669] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.705 [2024-11-17 23:07:41.285681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.705 #25 NEW cov: 11805 ft: 14545 corp: 24/607b lim: 40 exec/s: 25 rss: 70Mb L: 35/40 MS: 1 ChangeByte- 00:07:44.966 [2024-11-17 23:07:41.325641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:09000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.325668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.966 [2024-11-17 23:07:41.325726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.325740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.966 [2024-11-17 23:07:41.325800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.325814] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.966 [2024-11-17 23:07:41.325872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.325885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.966 [2024-11-17 23:07:41.325945] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:000000d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.325958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.966 #26 NEW cov: 11805 ft: 14569 corp: 25/647b lim: 40 exec/s: 26 rss: 70Mb L: 40/40 MS: 1 ChangeBinInt- 00:07:44.966 [2024-11-17 23:07:41.365806] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:f8000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.365831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.966 [2024-11-17 23:07:41.365888] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.365902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.966 [2024-11-17 23:07:41.365957] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.365971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.966 [2024-11-17 23:07:41.366028] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.366044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.966 [2024-11-17 23:07:41.366102] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.366116] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.966 #27 NEW cov: 11805 ft: 14600 corp: 26/687b lim: 40 exec/s: 27 rss: 70Mb L: 40/40 MS: 1 InsertRepeatedBytes- 00:07:44.966 [2024-11-17 23:07:41.405770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.405795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.966 [2024-11-17 23:07:41.405856] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:03a42e2f cdw11:5b0f8b00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.405869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.966 [2024-11-17 23:07:41.405926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.405939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.966 [2024-11-17 23:07:41.405997] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.406011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.966 #28 NEW cov: 11805 ft: 14607 corp: 27/722b lim: 40 exec/s: 28 rss: 70Mb L: 35/40 MS: 1 CMP- DE: "\003\244./[\017\213\000"- 00:07:44.966 [2024-11-17 23:07:41.446032] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.446057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.966 [2024-11-17 23:07:41.446115] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000100 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.446129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.966 [2024-11-17 23:07:41.446186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.446200] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.966 [2024-11-17 23:07:41.446255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.446269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.966 [2024-11-17 23:07:41.446325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:000000d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.446339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:44.966 #29 NEW cov: 11805 ft: 14621 corp: 28/762b lim: 40 exec/s: 29 rss: 70Mb L: 40/40 MS: 1 ShuffleBytes- 00:07:44.966 [2024-11-17 23:07:41.486007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.966 [2024-11-17 23:07:41.486036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.967 [2024-11-17 23:07:41.486093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060600 cdw11:93000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.967 [2024-11-17 23:07:41.486107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.967 [2024-11-17 23:07:41.486166] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.967 [2024-11-17 23:07:41.486180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.967 [2024-11-17 23:07:41.486236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:0001007f cdw11:d1b80ff1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.967 [2024-11-17 23:07:41.486250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.967 #30 NEW cov: 11805 ft: 14661 corp: 29/797b lim: 40 exec/s: 30 rss: 70Mb L: 35/40 MS: 1 CMP- DE: "\001\000\177\321\270\017\361U"- 00:07:44.967 [2024-11-17 23:07:41.526152] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000006 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.967 [2024-11-17 23:07:41.526178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.967 [2024-11-17 23:07:41.526235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.967 [2024-11-17 
23:07:41.526249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.967 [2024-11-17 23:07:41.526305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.967 [2024-11-17 23:07:41.526319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.967 [2024-11-17 23:07:41.526374] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000024 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.967 [2024-11-17 23:07:41.526388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:44.967 #31 NEW cov: 11805 ft: 14675 corp: 30/836b lim: 40 exec/s: 31 rss: 70Mb L: 39/40 MS: 1 InsertRepeatedBytes- 00:07:44.967 [2024-11-17 23:07:41.566276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.967 [2024-11-17 23:07:41.566301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:44.967 [2024-11-17 23:07:41.566359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.967 [2024-11-17 23:07:41.566374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:44.967 [2024-11-17 23:07:41.566433] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.967 [2024-11-17 23:07:41.566447] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:44.967 [2024-11-17 23:07:41.566504] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:44.967 [2024-11-17 23:07:41.566520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.227 #32 NEW cov: 11805 ft: 14680 corp: 31/875b lim: 40 exec/s: 32 rss: 70Mb L: 39/40 MS: 1 InsertRepeatedBytes- 00:07:45.227 [2024-11-17 23:07:41.606554] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.227 [2024-11-17 23:07:41.606579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.227 [2024-11-17 23:07:41.606636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.227 [2024-11-17 23:07:41.606650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.227 [2024-11-17 23:07:41.606708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 
cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.227 [2024-11-17 23:07:41.606722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.227 [2024-11-17 23:07:41.606780] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.227 [2024-11-17 23:07:41.606794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.227 [2024-11-17 23:07:41.606850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000098 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.227 [2024-11-17 23:07:41.606864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.227 #33 NEW cov: 11805 ft: 14685 corp: 32/915b lim: 40 exec/s: 33 rss: 70Mb L: 40/40 MS: 1 ChangeByte- 00:07:45.227 [2024-11-17 23:07:41.646494] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.227 [2024-11-17 23:07:41.646520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.227 [2024-11-17 23:07:41.646583] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.227 [2024-11-17 23:07:41.646597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.227 [2024-11-17 23:07:41.646655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.227 [2024-11-17 23:07:41.646669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.227 [2024-11-17 23:07:41.646702] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:f6000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.227 [2024-11-17 23:07:41.646715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.227 #34 NEW cov: 11805 ft: 14702 corp: 33/950b lim: 40 exec/s: 34 rss: 70Mb L: 35/40 MS: 1 ChangeBinInt- 00:07:45.227 [2024-11-17 23:07:41.686625] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000006 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.227 [2024-11-17 23:07:41.686651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.227 [2024-11-17 23:07:41.686710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060606 cdw11:06060000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.227 [2024-11-17 23:07:41.686724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.227 [2024-11-17 23:07:41.686782] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00240000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.227 [2024-11-17 23:07:41.686797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.227 [2024-11-17 23:07:41.686852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:23000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.227 [2024-11-17 23:07:41.686864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.227 #35 NEW cov: 11805 ft: 14714 corp: 34/983b lim: 40 exec/s: 35 rss: 70Mb L: 33/40 MS: 1 EraseBytes- 00:07:45.227 [2024-11-17 23:07:41.726629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000024 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.227 [2024-11-17 23:07:41.726655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.228 [2024-11-17 23:07:41.726713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.228 [2024-11-17 23:07:41.726727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.228 [2024-11-17 23:07:41.726782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.228 [2024-11-17 23:07:41.726796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.228 #36 NEW cov: 11805 ft: 14748 corp: 35/1010b lim: 40 exec/s: 36 rss: 70Mb L: 27/40 MS: 1 CrossOver- 00:07:45.228 [2024-11-17 23:07:41.766847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.228 [2024-11-17 23:07:41.766873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.228 [2024-11-17 23:07:41.766932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.228 [2024-11-17 23:07:41.766946] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.228 [2024-11-17 23:07:41.767002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.228 [2024-11-17 23:07:41.767016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.228 [2024-11-17 23:07:41.767074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.228 [2024-11-17 23:07:41.767088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:07:45.228 #37 NEW cov: 11805 ft: 14754 corp: 36/1049b lim: 40 exec/s: 37 rss: 70Mb L: 39/40 MS: 1 CopyPart- 00:07:45.228 [2024-11-17 23:07:41.806889] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.228 [2024-11-17 23:07:41.806914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.228 [2024-11-17 23:07:41.806978] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.228 [2024-11-17 23:07:41.806993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.228 [2024-11-17 23:07:41.807052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.228 [2024-11-17 23:07:41.807068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.228 #38 NEW cov: 11805 ft: 14772 corp: 37/1077b lim: 40 exec/s: 38 rss: 70Mb L: 28/40 MS: 1 CopyPart- 00:07:45.487 [2024-11-17 23:07:41.846667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.487 [2024-11-17 23:07:41.846694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.487 #39 NEW cov: 11805 ft: 14805 corp: 38/1089b lim: 40 exec/s: 39 rss: 70Mb L: 12/40 MS: 1 EraseBytes- 00:07:45.487 [2024-11-17 23:07:41.887224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.487 [2024-11-17 23:07:41.887250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.487 [2024-11-17 23:07:41.887311] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060600 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.487 [2024-11-17 23:07:41.887325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.487 [2024-11-17 23:07:41.887383] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00007f00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.487 [2024-11-17 23:07:41.887397] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.487 [2024-11-17 23:07:41.887462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.487 [2024-11-17 23:07:41.887476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.487 #40 NEW cov: 11805 ft: 14838 corp: 39/1128b lim: 40 exec/s: 40 rss: 70Mb L: 39/40 MS: 1 CMP- DE: "\000\000\000\177"- 00:07:45.487 [2024-11-17 23:07:41.927246] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.487 [2024-11-17 23:07:41.927272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.487 [2024-11-17 23:07:41.927330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060600 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.487 [2024-11-17 23:07:41.927344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.487 [2024-11-17 23:07:41.927404] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.487 [2024-11-17 23:07:41.927418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.487 [2024-11-17 23:07:41.927479] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.487 [2024-11-17 23:07:41.927495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.488 [2024-11-17 23:07:41.957544] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:06b0b0b0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.488 [2024-11-17 23:07:41.957570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.488 [2024-11-17 23:07:41.957629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:b0b00606 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.488 [2024-11-17 23:07:41.957642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.488 [2024-11-17 23:07:41.957700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.488 [2024-11-17 23:07:41.957714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.488 [2024-11-17 23:07:41.957770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.488 [2024-11-17 23:07:41.957783] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.488 [2024-11-17 23:07:41.957839] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.488 [2024-11-17 23:07:41.957854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:45.488 #42 NEW cov: 11805 ft: 14869 corp: 40/1168b lim: 40 exec/s: 42 rss: 70Mb L: 40/40 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:45.488 [2024-11-17 23:07:41.997516] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:06060606 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.488 [2024-11-17 23:07:41.997548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.488 [2024-11-17 23:07:41.997607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:06060000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.488 [2024-11-17 23:07:41.997621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.488 [2024-11-17 23:07:41.997677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.488 [2024-11-17 23:07:41.997691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:45.488 [2024-11-17 23:07:41.997748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00050000 cdw11:00f60000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.488 [2024-11-17 23:07:41.997761] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:45.488 #43 NEW cov: 11805 ft: 14879 corp: 41/1204b lim: 40 exec/s: 43 rss: 70Mb L: 36/40 MS: 1 InsertByte- 00:07:45.488 [2024-11-17 23:07:42.037244] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00050000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.488 [2024-11-17 23:07:42.037269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.488 #44 NEW cov: 11805 ft: 14884 corp: 42/1213b lim: 40 exec/s: 44 rss: 70Mb L: 9/40 MS: 1 ShuffleBytes- 00:07:45.488 [2024-11-17 23:07:42.077472] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.488 [2024-11-17 23:07:42.077500] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:45.488 [2024-11-17 23:07:42.077566] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:45.488 [2024-11-17 23:07:42.077581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:45.488 #45 NEW cov: 11805 ft: 14892 corp: 43/1233b lim: 40 exec/s: 22 rss: 70Mb L: 20/40 MS: 1 EraseBytes- 00:07:45.488 #45 DONE cov: 11805 ft: 14892 corp: 43/1233b lim: 40 exec/s: 22 rss: 70Mb 00:07:45.488 ###### Recommended dictionary. ###### 00:07:45.488 "\003\244./[\017\213\000" # Uses: 0 00:07:45.488 "\001\000\177\321\270\017\361U" # Uses: 0 00:07:45.488 "\000\000\000\177" # Uses: 0 00:07:45.488 ###### End of recommended dictionary. 
###### 00:07:45.488 Done 45 runs in 2 second(s) 00:07:45.747 23:07:42 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_13.conf 00:07:45.747 23:07:42 -- ../common.sh@72 -- # (( i++ )) 00:07:45.747 23:07:42 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:45.747 23:07:42 -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:07:45.747 23:07:42 -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:07:45.747 23:07:42 -- nvmf/run.sh@24 -- # local timen=1 00:07:45.747 23:07:42 -- nvmf/run.sh@25 -- # local core=0x1 00:07:45.747 23:07:42 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:45.747 23:07:42 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:07:45.747 23:07:42 -- nvmf/run.sh@29 -- # printf %02d 14 00:07:45.747 23:07:42 -- nvmf/run.sh@29 -- # port=4414 00:07:45.747 23:07:42 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:45.747 23:07:42 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:07:45.747 23:07:42 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:45.747 23:07:42 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 -r /var/tmp/spdk14.sock 00:07:45.747 [2024-11-17 23:07:42.252415] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:45.747 [2024-11-17 23:07:42.252483] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303060 ] 00:07:45.747 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.007 [2024-11-17 23:07:42.433206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.007 [2024-11-17 23:07:42.496677] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:46.007 [2024-11-17 23:07:42.496802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.007 [2024-11-17 23:07:42.554621] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.007 [2024-11-17 23:07:42.570947] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:07:46.007 INFO: Running with entropic power schedule (0xFF, 100). 00:07:46.007 INFO: Seed: 431455639 00:07:46.007 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:46.007 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:46.007 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:07:46.007 INFO: A corpus is not provided, starting from an empty corpus 00:07:46.007 #2 INITED exec/s: 0 rss: 60Mb 00:07:46.007 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
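Run 13 above ends with libFuzzer's recommended dictionary, three byte strings that repeatedly produced new coverage, before nvmf/run.sh tears the target down and starts fuzzer 14 against the NVMe/TCP listener on port 4414. A minimal sketch, not something the job does itself, of keeping those entries in libFuzzer/AFL dictionary syntax (which wants \xNN hex escapes rather than the C-style octal escapes printed above) and of re-issuing the run.sh@36 invocation by hand; the $SPDK_DIR layout and the dictionary file name are assumptions, and nothing in this log feeds the dictionary back into the harness:

# Sketch only, assuming a local SPDK checkout built with the LLVM fuzzers.
SPDK_DIR=/path/to/spdk   # assumption: adjust to your build tree

# Run 13's recommended dictionary, octal transcribed to hex
# (\003\244 -> \x03\xa4, \017 -> \x0f, \213 -> \x8b, \177 -> \x7f, ...):
cat > nvmf_13.dict <<'EOF'
kw1="\x03\xa4./[\x0f\x8b\x00"
kw2="\x01\x00\x7f\xd1\xb8\x0f\xf1U"
kw3="\x00\x00\x00\x7f"
EOF

# Regenerate the per-port target config the same way run.sh@33 does
# (the job redirects this into /tmp/fuzz_json_14.conf):
sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' \
  "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > /tmp/fuzz_json_14.conf

# The run.sh@36 invocation with the CI paths swapped for $SPDK_DIR:
mkdir -p "$SPDK_DIR/../corpus/llvm_nvmf_14"
"$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
  -P "$SPDK_DIR/../output/llvm/" \
  -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' \
  -c /tmp/fuzz_json_14.conf -t 1 -D "$SPDK_DIR/../corpus/llvm_nvmf_14" \
  -Z 14 -r /var/tmp/spdk14.sock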
00:07:46.007 This may also happen if the target rejected all inputs we tried so far 00:07:46.007 [2024-11-17 23:07:42.615705] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.007 [2024-11-17 23:07:42.615746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.007 [2024-11-17 23:07:42.615780] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.007 [2024-11-17 23:07:42.615797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.527 NEW_FUNC[1/671]: 0x44e908 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:07:46.527 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:46.527 #28 NEW cov: 11572 ft: 11570 corp: 2/15b lim: 35 exec/s: 0 rss: 68Mb L: 14/14 MS: 1 InsertRepeatedBytes- 00:07:46.527 [2024-11-17 23:07:42.936356] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.527 [2024-11-17 23:07:42.936394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.527 [2024-11-17 23:07:42.936427] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.527 [2024-11-17 23:07:42.936442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.527 #29 NEW cov: 11685 ft: 11976 corp: 3/29b lim: 35 exec/s: 0 rss: 69Mb L: 14/14 MS: 1 ChangeByte- 00:07:46.527 [2024-11-17 23:07:43.006514] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.527 [2024-11-17 23:07:43.006553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.527 [2024-11-17 23:07:43.006586] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.527 [2024-11-17 23:07:43.006602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.527 [2024-11-17 23:07:43.006630] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.527 [2024-11-17 23:07:43.006644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.527 #30 NEW cov: 11691 ft: 12485 corp: 4/53b lim: 35 exec/s: 0 rss: 69Mb L: 24/24 MS: 1 CrossOver- 00:07:46.527 [2024-11-17 23:07:43.056731] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.527 [2024-11-17 23:07:43.056764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.527 [2024-11-17 
23:07:43.056796] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.527 [2024-11-17 23:07:43.056812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.527 [2024-11-17 23:07:43.056840] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.527 [2024-11-17 23:07:43.056856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.527 [2024-11-17 23:07:43.056884] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.527 [2024-11-17 23:07:43.056899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.527 NEW_FUNC[1/1]: 0x46fd38 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:46.527 #42 NEW cov: 11793 ft: 12977 corp: 5/87b lim: 35 exec/s: 0 rss: 69Mb L: 34/34 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:07:46.527 [2024-11-17 23:07:43.106861] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.527 [2024-11-17 23:07:43.106892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.527 [2024-11-17 23:07:43.106924] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.527 [2024-11-17 23:07:43.106939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.527 [2024-11-17 23:07:43.106967] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.527 [2024-11-17 23:07:43.106983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.527 [2024-11-17 23:07:43.107011] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.527 [2024-11-17 23:07:43.107027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.527 [2024-11-17 23:07:43.107054] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.527 [2024-11-17 23:07:43.107069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:46.787 #48 NEW cov: 11793 ft: 13180 corp: 6/122b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 InsertByte- 00:07:46.787 [2024-11-17 23:07:43.176912] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.176942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.787 [2024-11-17 23:07:43.176974] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.176989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.787 #50 NEW cov: 11793 ft: 13314 corp: 7/138b lim: 35 exec/s: 0 rss: 69Mb L: 16/35 MS: 2 ChangeByte-CrossOver- 00:07:46.787 [2024-11-17 23:07:43.227241] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.227273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.787 [2024-11-17 23:07:43.227306] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.227322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.787 [2024-11-17 23:07:43.227352] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.227369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.787 [2024-11-17 23:07:43.227399] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.227415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.787 [2024-11-17 23:07:43.227449] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.227465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:46.787 #51 NEW cov: 11793 ft: 13466 corp: 8/173b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 CrossOver- 00:07:46.787 [2024-11-17 23:07:43.297400] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.297431] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.787 [2024-11-17 23:07:43.297463] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.297478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.787 [2024-11-17 23:07:43.297507] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.297522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.787 [2024-11-17 23:07:43.297557] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.297573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.787 [2024-11-17 23:07:43.297601] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.297617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:46.787 #57 NEW cov: 11793 ft: 13507 corp: 9/208b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 CrossOver- 00:07:46.787 [2024-11-17 23:07:43.347324] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.347354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.787 [2024-11-17 23:07:43.347386] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.347400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.787 #58 NEW cov: 11793 ft: 13561 corp: 10/222b lim: 35 exec/s: 0 rss: 69Mb L: 14/35 MS: 1 CrossOver- 00:07:46.787 [2024-11-17 23:07:43.397717] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.397752] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:46.787 [2024-11-17 23:07:43.397788] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.397806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:46.787 [2024-11-17 23:07:43.397839] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.397856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:46.787 [2024-11-17 23:07:43.397887] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.397908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:46.787 [2024-11-17 23:07:43.397939] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:46.787 [2024-11-17 23:07:43.397956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:47.047 #59 NEW cov: 11793 ft: 13634 corp: 11/257b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:07:47.047 [2024-11-17 23:07:43.447789] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET 
FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.047 [2024-11-17 23:07:43.447821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.047 [2024-11-17 23:07:43.447851] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.047 [2024-11-17 23:07:43.447867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.047 [2024-11-17 23:07:43.447894] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.047 [2024-11-17 23:07:43.447910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.047 [2024-11-17 23:07:43.447938] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.047 [2024-11-17 23:07:43.447953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:47.047 NEW_FUNC[1/1]: 0x1133ce8 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1651 00:07:47.047 #60 NEW cov: 11816 ft: 13761 corp: 12/292b lim: 35 exec/s: 0 rss: 69Mb L: 35/35 MS: 1 ChangeBit- 00:07:47.047 [2024-11-17 23:07:43.497781] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.047 [2024-11-17 23:07:43.497813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.047 [2024-11-17 23:07:43.497847] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.047 [2024-11-17 23:07:43.497863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.047 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:47.047 #61 NEW cov: 11839 ft: 13874 corp: 13/306b lim: 35 exec/s: 0 rss: 69Mb L: 14/35 MS: 1 ChangeByte- 00:07:47.047 [2024-11-17 23:07:43.568118] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.047 [2024-11-17 23:07:43.568150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.048 [2024-11-17 23:07:43.568182] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-11-17 23:07:43.568198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.048 [2024-11-17 23:07:43.568226] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-11-17 23:07:43.568241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.048 [2024-11-17 23:07:43.568269] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-11-17 23:07:43.568288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.048 [2024-11-17 23:07:43.568316] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-11-17 23:07:43.568331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:47.048 #62 NEW cov: 11839 ft: 13938 corp: 14/341b lim: 35 exec/s: 62 rss: 69Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:47.048 [2024-11-17 23:07:43.638210] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-11-17 23:07:43.638240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.048 [2024-11-17 23:07:43.638273] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-11-17 23:07:43.638288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.048 [2024-11-17 23:07:43.638317] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.048 [2024-11-17 23:07:43.638332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.308 #63 NEW cov: 11839 ft: 13971 corp: 15/363b lim: 35 exec/s: 63 rss: 69Mb L: 22/35 MS: 1 CopyPart- 00:07:47.308 [2024-11-17 23:07:43.688412] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.688444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.308 [2024-11-17 23:07:43.688474] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.688489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.308 [2024-11-17 23:07:43.688518] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.688541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.308 [2024-11-17 23:07:43.688571] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.688586] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:47.308 #64 NEW cov: 11839 ft: 13995 corp: 16/398b lim: 35 exec/s: 64 rss: 69Mb L: 35/35 MS: 1 ChangeBinInt- 00:07:47.308 
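A note on reading the SET FEATURES notices above: per the NVMe spec, CDW10 bit 31 of Set Features is the Save (SV) bit and bits 7:0 are the Feature Identifier (FID), so cdw10:800000ff asks the target to save reserved feature 0xFF and the completion reports FEATURE ID NOT SAVEABLE (status 01/0d), while cdw10:00000000 hits a reserved FID and returns INVALID FIELD (00/02). A minimal decoder for these values (an illustrative helper, not part of the SPDK harness):

    # Hypothetical helper: decode the Set Features CDW10 values printed in
    # the notices above. Per the NVMe spec, bit 31 is the Save (SV) bit and
    # bits 7:0 are the Feature Identifier (FID).
    def decode_set_features_cdw10(cdw10: int) -> str:
        sv = (cdw10 >> 31) & 0x1   # SV: request that the feature be persisted
        fid = cdw10 & 0xFF         # FID: which feature is being set
        return f"SV={sv} FID=0x{fid:02x}"

    for val in (0x8000000A, 0x800000FF, 0x00000000):
        print(f"cdw10:{val:08x} -> {decode_set_features_cdw10(val)}")
    # cdw10:8000000a -> SV=1 FID=0x0a  (Write Atomicity; save rejected: 01/0d)
    # cdw10:800000ff -> SV=1 FID=0xff  (reserved: FEATURE ID NOT SAVEABLE)
    # cdw10:00000000 -> SV=0 FID=0x00  (reserved: INVALID FIELD, 00/02)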
[2024-11-17 23:07:43.748598] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.748630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.308 [2024-11-17 23:07:43.748662] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.748677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.308 [2024-11-17 23:07:43.748706] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.748721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.308 [2024-11-17 23:07:43.748754] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.748785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.308 [2024-11-17 23:07:43.748814] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.748830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:47.308 #65 NEW cov: 11839 ft: 14013 corp: 17/433b lim: 35 exec/s: 65 rss: 69Mb L: 35/35 MS: 1 CopyPart- 00:07:47.308 [2024-11-17 23:07:43.798711] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.798742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.308 [2024-11-17 23:07:43.798773] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.798789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.308 [2024-11-17 23:07:43.798816] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.798831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.308 [2024-11-17 23:07:43.798860] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.798874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.308 [2024-11-17 23:07:43.798902] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.798916] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:47.308 #66 NEW cov: 11839 ft: 14076 corp: 18/468b lim: 35 exec/s: 66 rss: 70Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:47.308 [2024-11-17 23:07:43.858648] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.858680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.308 #67 NEW cov: 11839 ft: 14860 corp: 19/477b lim: 35 exec/s: 67 rss: 70Mb L: 9/35 MS: 1 CrossOver- 00:07:47.308 [2024-11-17 23:07:43.919090] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.919123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.308 [2024-11-17 23:07:43.919168] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.919185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.308 [2024-11-17 23:07:43.919215] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.919232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.308 [2024-11-17 23:07:43.919262] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.308 [2024-11-17 23:07:43.919286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:47.576 #68 NEW cov: 11839 ft: 14871 corp: 20/512b lim: 35 exec/s: 68 rss: 70Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:47.576 [2024-11-17 23:07:43.989249] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.576 [2024-11-17 23:07:43.989281] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.576 [2024-11-17 23:07:43.989313] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.576 [2024-11-17 23:07:43.989329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.577 [2024-11-17 23:07:43.989358] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.577 [2024-11-17 23:07:43.989374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.577 [2024-11-17 23:07:43.989403] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.577 [2024-11-17 23:07:43.989418] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.577 [2024-11-17 23:07:43.989446] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:000000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.577 [2024-11-17 23:07:43.989461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:47.577 #69 NEW cov: 11839 ft: 14886 corp: 21/547b lim: 35 exec/s: 69 rss: 70Mb L: 35/35 MS: 1 CMP- DE: "\010\000\000\000"- 00:07:47.577 [2024-11-17 23:07:44.059231] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:80000090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.577 [2024-11-17 23:07:44.059263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.577 [2024-11-17 23:07:44.059295] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000090 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.577 [2024-11-17 23:07:44.059310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.578 #70 NEW cov: 11839 ft: 14941 corp: 22/563b lim: 35 exec/s: 70 rss: 70Mb L: 16/35 MS: 1 InsertRepeatedBytes- 00:07:47.578 [2024-11-17 23:07:44.109504] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.578 [2024-11-17 23:07:44.109540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.578 [2024-11-17 23:07:44.109573] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.578 [2024-11-17 23:07:44.109589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.578 [2024-11-17 23:07:44.109617] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.578 [2024-11-17 23:07:44.109633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.578 [2024-11-17 23:07:44.109661] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.578 [2024-11-17 23:07:44.109676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.578 [2024-11-17 23:07:44.109708] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.578 [2024-11-17 23:07:44.109724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:47.578 #71 NEW cov: 11839 ft: 14960 corp: 23/598b lim: 35 exec/s: 71 rss: 70Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:47.578 [2024-11-17 23:07:44.170961] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.578 [2024-11-17 23:07:44.170998] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.578 [2024-11-17 23:07:44.171071] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.579 [2024-11-17 23:07:44.171092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.579 [2024-11-17 23:07:44.171162] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.579 [2024-11-17 23:07:44.171182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.579 [2024-11-17 23:07:44.171250] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.579 [2024-11-17 23:07:44.171273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.579 [2024-11-17 23:07:44.171344] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.579 [2024-11-17 23:07:44.171365] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:47.840 #72 NEW cov: 11839 ft: 14987 corp: 24/633b lim: 35 exec/s: 72 rss: 70Mb L: 35/35 MS: 1 ChangeBit- 00:07:47.840 [2024-11-17 23:07:44.221053] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.221082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.840 [2024-11-17 23:07:44.221146] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.221163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.840 [2024-11-17 23:07:44.221224] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.221240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.840 [2024-11-17 23:07:44.221302] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.221318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:47.840 #73 NEW cov: 11839 ft: 15032 corp: 25/668b lim: 35 exec/s: 73 rss: 70Mb L: 35/35 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:07:47.840 [2024-11-17 23:07:44.260620] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:800000f0 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.260647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:07:47.840 [2024-11-17 23:07:44.260710] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.260727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.840 #74 NEW cov: 11839 ft: 15158 corp: 26/684b lim: 35 exec/s: 74 rss: 70Mb L: 16/35 MS: 1 PersAutoDict- DE: "\010\000\000\000"- 00:07:47.840 [2024-11-17 23:07:44.301187] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.301215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.840 [2024-11-17 23:07:44.301278] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.301292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.840 [2024-11-17 23:07:44.301352] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.301368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.840 #75 NEW cov: 11839 ft: 15194 corp: 27/715b lim: 35 exec/s: 75 rss: 70Mb L: 31/35 MS: 1 EraseBytes- 00:07:47.840 [2024-11-17 23:07:44.340835] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.340861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.840 [2024-11-17 23:07:44.340923] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.340937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.840 #78 NEW cov: 11839 ft: 15226 corp: 28/729b lim: 35 exec/s: 78 rss: 70Mb L: 14/35 MS: 3 ChangeByte-ShuffleBytes-CrossOver- 00:07:47.840 [2024-11-17 23:07:44.381090] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.381117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.840 [2024-11-17 23:07:44.381182] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.381198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.840 [2024-11-17 23:07:44.381258] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.381275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.840 #79 NEW cov: 11839 ft: 15241 corp: 29/755b lim: 35 exec/s: 79 rss: 70Mb L: 26/35 MS: 1 EraseBytes- 00:07:47.840 [2024-11-17 23:07:44.421567] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.421596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:47.840 [2024-11-17 23:07:44.421660] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.421677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:47.840 [2024-11-17 23:07:44.421742] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.421764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:47.840 [2024-11-17 23:07:44.421827] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.421843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:47.840 [2024-11-17 23:07:44.421907] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:8 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:47.840 [2024-11-17 23:07:44.421923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:07:47.840 #80 NEW cov: 11839 ft: 15249 corp: 30/790b lim: 35 exec/s: 80 rss: 70Mb L: 35/35 MS: 1 ShuffleBytes- 00:07:48.100 [2024-11-17 23:07:44.461176] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.100 [2024-11-17 23:07:44.461202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.100 [2024-11-17 23:07:44.461264] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:000000e3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.100 [2024-11-17 23:07:44.461278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.100 #81 NEW cov: 11839 ft: 15351 corp: 31/805b lim: 35 exec/s: 81 rss: 70Mb L: 15/35 MS: 1 InsertByte- 00:07:48.100 [2024-11-17 23:07:44.501314] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.100 [2024-11-17 23:07:44.501340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.100 [2024-11-17 23:07:44.501404] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.100 [2024-11-17 23:07:44.501418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.100 #82 NEW 
cov: 11839 ft: 15357 corp: 32/819b lim: 35 exec/s: 82 rss: 70Mb L: 14/35 MS: 1 ShuffleBytes- 00:07:48.100 [2024-11-17 23:07:44.541241] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.100 [2024-11-17 23:07:44.541269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.100 #83 NEW cov: 11839 ft: 15400 corp: 33/828b lim: 35 exec/s: 83 rss: 70Mb L: 9/35 MS: 1 ChangeBit- 00:07:48.100 [2024-11-17 23:07:44.581549] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:000000a1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.100 [2024-11-17 23:07:44.581575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.100 [2024-11-17 23:07:44.581640] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:8000008a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:48.100 [2024-11-17 23:07:44.581657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.100 #84 NEW cov: 11839 ft: 15404 corp: 34/843b lim: 35 exec/s: 42 rss: 70Mb L: 15/35 MS: 1 CMP- DE: "\030/\212\265a\017\213\000"- 00:07:48.100 #84 DONE cov: 11839 ft: 15404 corp: 34/843b lim: 35 exec/s: 42 rss: 70Mb 00:07:48.100 ###### Recommended dictionary. ###### 00:07:48.100 "\010\000\000\000" # Uses: 2 00:07:48.100 "\030/\212\265a\017\213\000" # Uses: 0 00:07:48.100 ###### End of recommended dictionary. ###### 00:07:48.100 Done 84 runs in 2 second(s) 00:07:48.360 23:07:44 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_14.conf 00:07:48.360 23:07:44 -- ../common.sh@72 -- # (( i++ )) 00:07:48.360 23:07:44 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:48.360 23:07:44 -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:07:48.360 23:07:44 -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:07:48.360 23:07:44 -- nvmf/run.sh@24 -- # local timen=1 00:07:48.360 23:07:44 -- nvmf/run.sh@25 -- # local core=0x1 00:07:48.360 23:07:44 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:48.360 23:07:44 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:07:48.360 23:07:44 -- nvmf/run.sh@29 -- # printf %02d 15 00:07:48.360 23:07:44 -- nvmf/run.sh@29 -- # port=4415 00:07:48.360 23:07:44 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:48.360 23:07:44 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:07:48.360 23:07:44 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:48.360 23:07:44 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 -r /var/tmp/spdk15.sock 00:07:48.360 [2024-11-17 23:07:44.749150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
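The "Recommended dictionary" block printed at the end of run 14 lists the byte strings, as C-style octal escapes, that libFuzzer's CMP and PersAutoDict mutations found useful; "\010\000\000\000" is the four bytes 08 00 00 00. A short sketch of turning those entries back into raw bytes (illustrative only, entries copied from the summary above):

    # Sketch: decode libFuzzer's octal-escaped dictionary entries into raw bytes.
    entries = [r"\010\000\000\000", r"\030/\212\265a\017\213\000"]
    for e in entries:
        raw = e.encode().decode("unicode_escape").encode("latin-1")
        print(e, "->", raw.hex())
    # \010\000\000\000           -> 08000000
    # \030/\212\265a\017\213\000 -> 182f8ab5610f8b00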
00:07:48.360 [2024-11-17 23:07:44.749200] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303409 ] 00:07:48.360 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.360 [2024-11-17 23:07:44.923880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.620 [2024-11-17 23:07:44.987110] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:48.620 [2024-11-17 23:07:44.987237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.620 [2024-11-17 23:07:45.045205] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.620 [2024-11-17 23:07:45.061525] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:07:48.620 INFO: Running with entropic power schedule (0xFF, 100). 00:07:48.620 INFO: Seed: 2919468409 00:07:48.620 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:48.620 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:48.620 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:07:48.620 INFO: A corpus is not provided, starting from an empty corpus 00:07:48.620 #2 INITED exec/s: 0 rss: 61Mb 00:07:48.620 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:48.620 This may also happen if the target rejected all inputs we tried so far 00:07:48.620 [2024-11-17 23:07:45.131313] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.620 [2024-11-17 23:07:45.131351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:48.879 NEW_FUNC[1/667]: 0x44fe48 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:07:48.879 NEW_FUNC[2/667]: 0x46fd38 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:07:48.879 #14 NEW cov: 11521 ft: 11508 corp: 2/21b lim: 35 exec/s: 0 rss: 68Mb L: 20/20 MS: 2 CrossOver-InsertRepeatedBytes- 00:07:48.879 [2024-11-17 23:07:45.451117] ctrlr.c:1685:nvmf_ctrlr_get_features_host_identifier: *ERROR*: Get Features - Host Identifier with EXHID=0 not allowed 00:07:48.879 [2024-11-17 23:07:45.451477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST IDENTIFIER cid:4 cdw10:00000181 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.879 [2024-11-17 23:07:45.451527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:48.879 NEW_FUNC[1/5]: 0xf4b568 in posix_sock_group_impl_poll /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/module/sock/posix/posix.c:1965 00:07:48.879 NEW_FUNC[2/5]: 0x1123968 in nvmf_ctrlr_get_features_host_identifier /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1675 00:07:48.879 #19 NEW cov: 11711 ft: 12313 corp: 3/29b lim: 35 exec/s: 0 rss: 68Mb L: 8/20 MS: 5 InsertByte-EraseBytes-InsertByte-InsertByte-InsertRepeatedBytes- 00:07:48.879 [2024-11-17 23:07:45.491894] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
RESERVED cid:5 cdw10:000001c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.879 [2024-11-17 23:07:45.491922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.138 #20 NEW cov: 11717 ft: 12765 corp: 4/49b lim: 35 exec/s: 0 rss: 68Mb L: 20/20 MS: 1 ChangeByte- 00:07:49.138 [2024-11-17 23:07:45.532176] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000001c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.138 [2024-11-17 23:07:45.532204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.138 [2024-11-17 23:07:45.532326] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.138 [2024-11-17 23:07:45.532343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.138 #21 NEW cov: 11802 ft: 13214 corp: 5/74b lim: 35 exec/s: 0 rss: 68Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:07:49.138 [2024-11-17 23:07:45.572030] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.138 [2024-11-17 23:07:45.572057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.138 #22 NEW cov: 11802 ft: 13262 corp: 6/93b lim: 35 exec/s: 0 rss: 68Mb L: 19/25 MS: 1 EraseBytes- 00:07:49.138 [2024-11-17 23:07:45.612520] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000001c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.139 [2024-11-17 23:07:45.612548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.139 [2024-11-17 23:07:45.612675] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.139 [2024-11-17 23:07:45.612691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.139 [2024-11-17 23:07:45.612822] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.139 [2024-11-17 23:07:45.612837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.139 #38 NEW cov: 11802 ft: 13779 corp: 7/126b lim: 35 exec/s: 0 rss: 69Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:07:49.139 [2024-11-17 23:07:45.652226] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.139 [2024-11-17 23:07:45.652252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.139 #39 NEW cov: 11802 ft: 13864 corp: 8/146b lim: 35 exec/s: 0 rss: 69Mb L: 20/33 MS: 1 ShuffleBytes- 00:07:49.139 [2024-11-17 23:07:45.691740] ctrlr.c:1685:nvmf_ctrlr_get_features_host_identifier: *ERROR*: Get Features - Host Identifier with EXHID=0 not allowed 00:07:49.139 [2024-11-17 23:07:45.692072] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST IDENTIFIER cid:4 cdw10:00000181 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:07:49.139 [2024-11-17 23:07:45.692101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.139 #40 NEW cov: 11802 ft: 13938 corp: 9/154b lim: 35 exec/s: 0 rss: 69Mb L: 8/33 MS: 1 ChangeBinInt- 00:07:49.139 [2024-11-17 23:07:45.732976] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000001c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.139 [2024-11-17 23:07:45.733003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.139 [2024-11-17 23:07:45.733267] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000063b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.139 [2024-11-17 23:07:45.733284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.398 #41 NEW cov: 11802 ft: 14088 corp: 10/188b lim: 35 exec/s: 0 rss: 69Mb L: 34/34 MS: 1 CrossOver- 00:07:49.398 [2024-11-17 23:07:45.783072] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000001c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.398 [2024-11-17 23:07:45.783098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.398 [2024-11-17 23:07:45.783228] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.398 [2024-11-17 23:07:45.783245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.398 [2024-11-17 23:07:45.783339] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.398 [2024-11-17 23:07:45.783356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.398 #47 NEW cov: 11802 ft: 14148 corp: 11/221b lim: 35 exec/s: 0 rss: 69Mb L: 33/34 MS: 1 ChangeBinInt- 00:07:49.398 [2024-11-17 23:07:45.822710] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.398 [2024-11-17 23:07:45.822738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.398 #48 NEW cov: 11802 ft: 14192 corp: 12/241b lim: 35 exec/s: 0 rss: 69Mb L: 20/34 MS: 1 ChangeBit- 00:07:49.398 [2024-11-17 23:07:45.862946] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.398 [2024-11-17 23:07:45.862972] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.398 #49 NEW cov: 11802 ft: 14223 corp: 13/261b lim: 35 exec/s: 0 rss: 69Mb L: 20/34 MS: 1 CopyPart- 00:07:49.398 [2024-11-17 23:07:45.902996] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.398 [2024-11-17 23:07:45.903023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.398 #50 NEW cov: 11802 ft: 
14238 corp: 14/281b lim: 35 exec/s: 0 rss: 69Mb L: 20/34 MS: 1 ChangeByte- 00:07:49.398 [2024-11-17 23:07:45.942507] ctrlr.c:1685:nvmf_ctrlr_get_features_host_identifier: *ERROR*: Get Features - Host Identifier with EXHID=0 not allowed 00:07:49.398 [2024-11-17 23:07:45.942876] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST IDENTIFIER cid:4 cdw10:00000181 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.398 [2024-11-17 23:07:45.942904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.398 #51 NEW cov: 11802 ft: 14253 corp: 15/289b lim: 35 exec/s: 0 rss: 69Mb L: 8/34 MS: 1 ChangeByte- 00:07:49.398 [2024-11-17 23:07:45.983352] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.398 [2024-11-17 23:07:45.983379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.398 [2024-11-17 23:07:45.983508] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.398 [2024-11-17 23:07:45.983527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.398 [2024-11-17 23:07:45.983671] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.398 [2024-11-17 23:07:45.983686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.657 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:49.657 #52 NEW cov: 11825 ft: 14300 corp: 16/313b lim: 35 exec/s: 0 rss: 69Mb L: 24/34 MS: 1 InsertRepeatedBytes- 00:07:49.657 [2024-11-17 23:07:46.032847] ctrlr.c:1685:nvmf_ctrlr_get_features_host_identifier: *ERROR*: Get Features - Host Identifier with EXHID=0 not allowed 00:07:49.657 [2024-11-17 23:07:46.033232] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST IDENTIFIER cid:4 cdw10:00000181 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.657 [2024-11-17 23:07:46.033259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.657 #53 NEW cov: 11825 ft: 14310 corp: 17/321b lim: 35 exec/s: 0 rss: 69Mb L: 8/34 MS: 1 ChangeBinInt- 00:07:49.658 [2024-11-17 23:07:46.073537] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.658 [2024-11-17 23:07:46.073564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.658 #59 NEW cov: 11825 ft: 14345 corp: 18/341b lim: 35 exec/s: 0 rss: 69Mb L: 20/34 MS: 1 ChangeBinInt- 00:07:49.658 [2024-11-17 23:07:46.114265] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000001c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.658 [2024-11-17 23:07:46.114294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.658 [2024-11-17 23:07:46.114444] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 
cdw10:0000013b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.658 [2024-11-17 23:07:46.114461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.658 [2024-11-17 23:07:46.114603] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.658 [2024-11-17 23:07:46.114622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.658 [2024-11-17 23:07:46.154314] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000148 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.658 [2024-11-17 23:07:46.154340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.658 [2024-11-17 23:07:46.154468] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.658 [2024-11-17 23:07:46.154485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.658 [2024-11-17 23:07:46.154632] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.658 [2024-11-17 23:07:46.154651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:49.658 #61 NEW cov: 11825 ft: 14355 corp: 19/375b lim: 35 exec/s: 61 rss: 69Mb L: 34/34 MS: 2 InsertByte-ChangeBinInt- 00:07:49.658 NEW_FUNC[1/1]: 0x46e6f8 in feat_interrupt_coalescing /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:325 00:07:49.658 #62 NEW cov: 11847 ft: 14458 corp: 20/383b lim: 35 exec/s: 62 rss: 69Mb L: 8/34 MS: 1 ChangeBinInt- 00:07:49.658 [2024-11-17 23:07:46.244125] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.658 [2024-11-17 23:07:46.244154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.916 #63 NEW cov: 11847 ft: 14505 corp: 21/403b lim: 35 exec/s: 63 rss: 70Mb L: 20/34 MS: 1 ChangeByte- 00:07:49.917 [2024-11-17 23:07:46.294538] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.917 [2024-11-17 23:07:46.294565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.917 [2024-11-17 23:07:46.294708] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.917 [2024-11-17 23:07:46.294726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:49.917 #64 NEW cov: 11847 ft: 14548 corp: 22/424b lim: 35 exec/s: 64 rss: 70Mb L: 21/34 MS: 1 InsertByte- 00:07:49.917 [2024-11-17 23:07:46.353782] ctrlr.c:1685:nvmf_ctrlr_get_features_host_identifier: *ERROR*: Get Features - Host Identifier with EXHID=0 not allowed 00:07:49.917 [2024-11-17 23:07:46.354158] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
HOST IDENTIFIER cid:4 cdw10:00000181 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.917 [2024-11-17 23:07:46.354186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.917 #65 NEW cov: 11847 ft: 14735 corp: 23/433b lim: 35 exec/s: 65 rss: 70Mb L: 9/34 MS: 1 InsertByte- 00:07:49.917 [2024-11-17 23:07:46.393900] ctrlr.c:1685:nvmf_ctrlr_get_features_host_identifier: *ERROR*: Get Features - Host Identifier with EXHID=0 not allowed 00:07:49.917 [2024-11-17 23:07:46.394286] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST IDENTIFIER cid:4 cdw10:00000181 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.917 [2024-11-17 23:07:46.394314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:49.917 #66 NEW cov: 11847 ft: 14787 corp: 24/441b lim: 35 exec/s: 66 rss: 70Mb L: 8/34 MS: 1 ChangeByte- 00:07:49.917 [2024-11-17 23:07:46.434713] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.917 [2024-11-17 23:07:46.434742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.917 #67 NEW cov: 11847 ft: 14791 corp: 25/461b lim: 35 exec/s: 67 rss: 70Mb L: 20/34 MS: 1 ShuffleBytes- 00:07:49.917 [2024-11-17 23:07:46.474858] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:49.917 [2024-11-17 23:07:46.474885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:49.917 #68 NEW cov: 11847 ft: 14797 corp: 26/481b lim: 35 exec/s: 68 rss: 70Mb L: 20/34 MS: 1 ChangeBit- 00:07:50.175 #69 NEW cov: 11847 ft: 14822 corp: 27/492b lim: 35 exec/s: 69 rss: 70Mb L: 11/34 MS: 1 EraseBytes- 00:07:50.175 [2024-11-17 23:07:46.554420] ctrlr.c:1685:nvmf_ctrlr_get_features_host_identifier: *ERROR*: Get Features - Host Identifier with EXHID=0 not allowed 00:07:50.175 [2024-11-17 23:07:46.554761] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST IDENTIFIER cid:4 cdw10:00000181 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.175 [2024-11-17 23:07:46.554789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.175 #70 NEW cov: 11847 ft: 14847 corp: 28/501b lim: 35 exec/s: 70 rss: 70Mb L: 9/34 MS: 1 ChangeByte- 00:07:50.175 [2024-11-17 23:07:46.595217] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.175 [2024-11-17 23:07:46.595245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.175 #71 NEW cov: 11847 ft: 14860 corp: 29/517b lim: 35 exec/s: 71 rss: 70Mb L: 16/34 MS: 1 CrossOver- 00:07:50.175 [2024-11-17 23:07:46.634636] ctrlr.c:1685:nvmf_ctrlr_get_features_host_identifier: *ERROR*: Get Features - Host Identifier with EXHID=0 not allowed 00:07:50.175 [2024-11-17 23:07:46.635019] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST IDENTIFIER cid:4 cdw10:00000181 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.175 [2024-11-17 23:07:46.635047] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD 
(00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.175 #72 NEW cov: 11847 ft: 14867 corp: 30/528b lim: 35 exec/s: 72 rss: 70Mb L: 11/34 MS: 1 CopyPart- 00:07:50.175 [2024-11-17 23:07:46.675834] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000001c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.175 [2024-11-17 23:07:46.675860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.175 [2024-11-17 23:07:46.676132] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:0000063b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.175 [2024-11-17 23:07:46.676158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.176 #73 NEW cov: 11847 ft: 14876 corp: 31/562b lim: 35 exec/s: 73 rss: 70Mb L: 34/34 MS: 1 ChangeByte- 00:07:50.176 [2024-11-17 23:07:46.716036] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000001c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.176 [2024-11-17 23:07:46.716062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.176 [2024-11-17 23:07:46.716182] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.176 [2024-11-17 23:07:46.716198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.176 [2024-11-17 23:07:46.716320] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.176 [2024-11-17 23:07:46.716336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.176 #74 NEW cov: 11847 ft: 14880 corp: 32/595b lim: 35 exec/s: 74 rss: 70Mb L: 33/34 MS: 1 ChangeByte- 00:07:50.176 [2024-11-17 23:07:46.755622] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.176 [2024-11-17 23:07:46.755651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.176 #75 NEW cov: 11847 ft: 14897 corp: 33/615b lim: 35 exec/s: 75 rss: 70Mb L: 20/34 MS: 1 ChangeBit- 00:07:50.435 [2024-11-17 23:07:46.795193] ctrlr.c:1685:nvmf_ctrlr_get_features_host_identifier: *ERROR*: Get Features - Host Identifier with EXHID=0 not allowed 00:07:50.435 [2024-11-17 23:07:46.795701] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST IDENTIFIER cid:4 cdw10:00000181 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.435 [2024-11-17 23:07:46.795729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.435 [2024-11-17 23:07:46.795855] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.435 [2024-11-17 23:07:46.795872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.435 #76 NEW cov: 11847 ft: 14908 corp: 34/635b lim: 35 exec/s: 76 rss: 70Mb L: 20/34 MS: 1 CrossOver- 
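Status lines such as "#76 NEW cov: 11847 ft: 14908 corp: 34/635b lim: 35 exec/s: 76 rss: 70Mb L: 20/34 MS: 1 CrossOver-" follow stock libFuzzer output: the leading number counts executions so far, cov counts coverage points, ft counts features, corp gives corpus size in units/bytes, lim is the current input-length cap, and MS names the mutation sequence that produced the new input. A small parser sketch (hypothetical, field meanings assumed per standard libFuzzer output):

    # Hypothetical parser for the "#N NEW ..." libFuzzer status lines above.
    import re

    STATUS = re.compile(
        r"#(?P<run>\d+)\s+NEW\s+cov: (?P<cov>\d+) ft: (?P<ft>\d+) "
        r"corp: (?P<units>\d+)/(?P<bytes>\d+)b lim: (?P<lim>\d+) "
        r"exec/s: (?P<exec_s>\d+) rss: (?P<rss_mb>\d+)Mb")

    line = ("#76 NEW cov: 11847 ft: 14908 corp: 34/635b lim: 35 "
            "exec/s: 76 rss: 70Mb L: 20/34 MS: 1 CrossOver-")
    stats = {k: int(v) for k, v in STATUS.search(line).groupdict().items()}
    print(stats)  # {'run': 76, 'cov': 11847, 'ft': 14908, 'units': 34, ...}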
00:07:50.435 [2024-11-17 23:07:46.836305] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000001c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.435 [2024-11-17 23:07:46.836333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.435 [2024-11-17 23:07:46.836472] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:0000013b SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.435 [2024-11-17 23:07:46.836489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.435 [2024-11-17 23:07:46.836615] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.435 [2024-11-17 23:07:46.836634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.435 #77 NEW cov: 11847 ft: 14918 corp: 35/668b lim: 35 exec/s: 77 rss: 70Mb L: 33/34 MS: 1 CrossOver- 00:07:50.435 [2024-11-17 23:07:46.875420] ctrlr.c:1685:nvmf_ctrlr_get_features_host_identifier: *ERROR*: Get Features - Host Identifier with EXHID=0 not allowed 00:07:50.435 [2024-11-17 23:07:46.875799] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST IDENTIFIER cid:4 cdw10:00000181 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.435 [2024-11-17 23:07:46.875827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.435 #83 NEW cov: 11847 ft: 14932 corp: 36/676b lim: 35 exec/s: 83 rss: 70Mb L: 8/34 MS: 1 ChangeByte- 00:07:50.435 [2024-11-17 23:07:46.916535] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.435 [2024-11-17 23:07:46.916562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.435 [2024-11-17 23:07:46.916701] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.435 [2024-11-17 23:07:46.916728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.435 [2024-11-17 23:07:46.916838] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.435 [2024-11-17 23:07:46.916854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.435 [2024-11-17 23:07:46.916968] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000123 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.435 [2024-11-17 23:07:46.916983] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.435 #84 NEW cov: 11847 ft: 15085 corp: 37/706b lim: 35 exec/s: 84 rss: 70Mb L: 30/34 MS: 1 CrossOver- 00:07:50.435 [2024-11-17 23:07:46.956289] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000001c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.435 [2024-11-17 23:07:46.956317] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.435 #85 NEW cov: 11847 ft: 15121 corp: 38/726b lim: 35 exec/s: 85 rss: 70Mb L: 20/34 MS: 1 ChangeByte- 00:07:50.435 [2024-11-17 23:07:46.995717] ctrlr.c:1685:nvmf_ctrlr_get_features_host_identifier: *ERROR*: Get Features - Host Identifier with EXHID=0 not allowed 00:07:50.435 [2024-11-17 23:07:46.996069] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST IDENTIFIER cid:4 cdw10:00000181 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.435 [2024-11-17 23:07:46.996097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.435 #86 NEW cov: 11847 ft: 15129 corp: 39/736b lim: 35 exec/s: 86 rss: 70Mb L: 10/34 MS: 1 InsertByte- 00:07:50.435 [2024-11-17 23:07:47.036148] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.435 [2024-11-17 23:07:47.036175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.695 #87 NEW cov: 11847 ft: 15137 corp: 40/744b lim: 35 exec/s: 87 rss: 70Mb L: 8/34 MS: 1 ChangeBinInt- 00:07:50.695 [2024-11-17 23:07:47.076159] ctrlr.c:1685:nvmf_ctrlr_get_features_host_identifier: *ERROR*: Get Features - Host Identifier with EXHID=0 not allowed 00:07:50.695 [2024-11-17 23:07:47.076810] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES HOST IDENTIFIER cid:4 cdw10:00000181 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.695 [2024-11-17 23:07:47.076837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.695 [2024-11-17 23:07:47.076971] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.695 [2024-11-17 23:07:47.076986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.695 [2024-11-17 23:07:47.077106] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.695 [2024-11-17 23:07:47.077124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.695 #88 NEW cov: 11847 ft: 15214 corp: 41/769b lim: 35 exec/s: 88 rss: 70Mb L: 25/34 MS: 1 CrossOver- 00:07:50.695 [2024-11-17 23:07:47.117068] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.695 [2024-11-17 23:07:47.117094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:07:50.695 [2024-11-17 23:07:47.117232] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.695 [2024-11-17 23:07:47.117248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:07:50.695 [2024-11-17 23:07:47.117375] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000006c0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.695 [2024-11-17 23:07:47.117391] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:07:50.695 [2024-11-17 23:07:47.117515] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000006d0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:50.695 [2024-11-17 23:07:47.117530] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:07:50.695 #89 NEW cov: 11847 ft: 15222 corp: 42/799b lim: 35 exec/s: 44 rss: 70Mb L: 30/34 MS: 1 InsertRepeatedBytes- 00:07:50.695 #89 DONE cov: 11847 ft: 15222 corp: 42/799b lim: 35 exec/s: 44 rss: 70Mb 00:07:50.695 Done 89 runs in 2 second(s) 00:07:50.695 23:07:47 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_15.conf 00:07:50.695 23:07:47 -- ../common.sh@72 -- # (( i++ )) 00:07:50.695 23:07:47 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:50.695 23:07:47 -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:07:50.695 23:07:47 -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:07:50.695 23:07:47 -- nvmf/run.sh@24 -- # local timen=1 00:07:50.695 23:07:47 -- nvmf/run.sh@25 -- # local core=0x1 00:07:50.695 23:07:47 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:50.695 23:07:47 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:07:50.695 23:07:47 -- nvmf/run.sh@29 -- # printf %02d 16 00:07:50.695 23:07:47 -- nvmf/run.sh@29 -- # port=4416 00:07:50.695 23:07:47 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:50.695 23:07:47 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:07:50.695 23:07:47 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:50.695 23:07:47 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 -r /var/tmp/spdk16.sock 00:07:50.695 [2024-11-17 23:07:47.294755] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:50.695 [2024-11-17 23:07:47.294821] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303952 ] 00:07:50.954 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.954 [2024-11-17 23:07:47.472004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.954 [2024-11-17 23:07:47.537807] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:50.954 [2024-11-17 23:07:47.537931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.213 [2024-11-17 23:07:47.596107] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.213 [2024-11-17 23:07:47.612390] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:07:51.213 INFO: Running with entropic power schedule (0xFF, 100). 
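The shell trace above captures the per-target launch pattern in nvmf/run.sh: the TCP port is derived from the fuzzer index (apparently "44" plus the zero-padded index, so 16 maps to 4416), the listener template fuzz_json.conf (which listens on 4420) is rewritten with sed, and llvm_nvme_fuzz is started against the resulting transport ID. A condensed, standalone sketch of that sequence follows; SPDK_DIR is a stand-in introduced here for the absolute workspace paths in the trace, and the redirection of the sed output into the per-run config is inferred, since xtrace does not show redirections:

#!/usr/bin/env bash
# Condensed sketch of the start_llvm_fuzz launch sequence traced above.
# SPDK_DIR stands in for the workspace paths; the sed redirection is inferred.
fuzzer_type=16
port="44$(printf %02d "$fuzzer_type")"      # 16 -> 4416, 17 -> 4417, ...
nvmf_cfg="/tmp/fuzz_json_${fuzzer_type}.conf"
corpus_dir="$SPDK_DIR/../corpus/llvm_nvmf_${fuzzer_type}"
mkdir -p "$corpus_dir"

# Point the target config at the per-run TCP port (the template listens on 4420).
sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
    "$SPDK_DIR/test/fuzz/llvm/nvmf/fuzz_json.conf" > "$nvmf_cfg"

trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
"$SPDK_DIR/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" \
    -m 0x1 -s 512 -P "$SPDK_DIR/../output/llvm/" -F "$trid" \
    -c "$nvmf_cfg" -t 1 -D "$corpus_dir" -Z "$fuzzer_type" \
    -r "/var/tmp/spdk${fuzzer_type}.sock"

Running one fuzzer type per iteration this way keeps each target's corpus, config, and RPC socket isolated, which matches the rm -rf /tmp/fuzz_json_N.conf cleanup seen between runs.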
00:07:51.213 INFO: Seed: 1175485314 00:07:51.213 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:51.213 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:51.213 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:07:51.213 INFO: A corpus is not provided, starting from an empty corpus 00:07:51.213 #2 INITED exec/s: 0 rss: 60Mb 00:07:51.213 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:07:51.213 This may also happen if the target rejected all inputs we tried so far 00:07:51.213 [2024-11-17 23:07:47.671048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.213 [2024-11-17 23:07:47.671080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.213 [2024-11-17 23:07:47.671124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.213 [2024-11-17 23:07:47.671140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.213 [2024-11-17 23:07:47.671195] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.213 [2024-11-17 23:07:47.671211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.473 NEW_FUNC[1/671]: 0x451308 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:07:51.473 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:51.473 #9 NEW cov: 11663 ft: 11643 corp: 2/84b lim: 105 exec/s: 0 rss: 68Mb L: 83/83 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:07:51.473 [2024-11-17 23:07:47.991595] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3906369333256140342 len:13879 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.473 [2024-11-17 23:07:47.991636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.473 #13 NEW cov: 11776 ft: 12707 corp: 3/119b lim: 105 exec/s: 0 rss: 68Mb L: 35/83 MS: 4 InsertByte-ChangeBit-ChangeBinInt-InsertRepeatedBytes- 00:07:51.473 [2024-11-17 23:07:48.031747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3906369333256140342 len:13879 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.473 [2024-11-17 23:07:48.031775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.473 [2024-11-17 23:07:48.031835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3906369333256140342 len:13879 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.473 [2024-11-17 23:07:48.031852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.473 #14 NEW cov: 11782 ft: 13122 corp: 4/171b lim: 105 
exec/s: 0 rss: 68Mb L: 52/83 MS: 1 CopyPart- 00:07:51.473 [2024-11-17 23:07:48.072044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.473 [2024-11-17 23:07:48.072073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.473 [2024-11-17 23:07:48.072116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.473 [2024-11-17 23:07:48.072131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.473 [2024-11-17 23:07:48.072184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.473 [2024-11-17 23:07:48.072199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.473 [2024-11-17 23:07:48.072252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.473 [2024-11-17 23:07:48.072267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:51.732 #20 NEW cov: 11867 ft: 13929 corp: 5/267b lim: 105 exec/s: 0 rss: 68Mb L: 96/96 MS: 1 InsertRepeatedBytes- 00:07:51.732 [2024-11-17 23:07:48.111996] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3906369333256140342 len:13879 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.732 [2024-11-17 23:07:48.112025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.732 [2024-11-17 23:07:48.112082] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3906369333256140342 len:13879 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.732 [2024-11-17 23:07:48.112097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.732 #21 NEW cov: 11867 ft: 14097 corp: 6/312b lim: 105 exec/s: 0 rss: 68Mb L: 45/96 MS: 1 CrossOver- 00:07:51.732 [2024-11-17 23:07:48.152175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.732 [2024-11-17 23:07:48.152203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.732 [2024-11-17 23:07:48.152241] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.732 [2024-11-17 23:07:48.152257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.732 [2024-11-17 23:07:48.152310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.732 [2024-11-17 23:07:48.152326] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.732 #22 NEW cov: 11867 ft: 14175 corp: 7/384b lim: 105 exec/s: 0 rss: 68Mb L: 72/96 MS: 1 EraseBytes- 00:07:51.732 [2024-11-17 23:07:48.192049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3906369333256140342 len:13879 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.732 [2024-11-17 23:07:48.192082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.732 #23 NEW cov: 11867 ft: 14385 corp: 8/419b lim: 105 exec/s: 0 rss: 68Mb L: 35/96 MS: 1 ShuffleBytes- 00:07:51.732 [2024-11-17 23:07:48.232426] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.732 [2024-11-17 23:07:48.232455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.732 [2024-11-17 23:07:48.232492] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.732 [2024-11-17 23:07:48.232509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.732 [2024-11-17 23:07:48.232567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6799976246779207264 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.732 [2024-11-17 23:07:48.232581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.732 #24 NEW cov: 11867 ft: 14455 corp: 9/492b lim: 105 exec/s: 0 rss: 68Mb L: 73/96 MS: 1 InsertByte- 00:07:51.732 [2024-11-17 23:07:48.272546] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.732 [2024-11-17 23:07:48.272574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.732 [2024-11-17 23:07:48.272613] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.732 [2024-11-17 23:07:48.272628] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.732 [2024-11-17 23:07:48.272683] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6799976246779207264 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.732 [2024-11-17 23:07:48.272699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.732 #25 NEW cov: 11867 ft: 14563 corp: 10/566b lim: 105 exec/s: 0 rss: 68Mb L: 74/96 MS: 1 InsertByte- 00:07:51.732 [2024-11-17 23:07:48.312680] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.732 [2024-11-17 23:07:48.312708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:51.732 [2024-11-17 23:07:48.312751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.732 [2024-11-17 23:07:48.312767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.732 [2024-11-17 23:07:48.312820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6799976246779207264 len:18945 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.732 [2024-11-17 23:07:48.312835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.732 #26 NEW cov: 11867 ft: 14592 corp: 11/640b lim: 105 exec/s: 0 rss: 68Mb L: 74/96 MS: 1 ChangeBinInt- 00:07:51.991 [2024-11-17 23:07:48.352539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3906369333256140342 len:13879 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.991 [2024-11-17 23:07:48.352567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.991 #27 NEW cov: 11867 ft: 14683 corp: 12/675b lim: 105 exec/s: 0 rss: 68Mb L: 35/96 MS: 1 ChangeBit- 00:07:51.991 [2024-11-17 23:07:48.393038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.991 [2024-11-17 23:07:48.393066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.991 [2024-11-17 23:07:48.393112] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.991 [2024-11-17 23:07:48.393128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.991 [2024-11-17 23:07:48.393179] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6799976246779207264 len:19039 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.991 [2024-11-17 23:07:48.393195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.991 [2024-11-17 23:07:48.393248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.991 [2024-11-17 23:07:48.393262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:51.991 #28 NEW cov: 11867 ft: 14714 corp: 13/779b lim: 105 exec/s: 0 rss: 69Mb L: 104/104 MS: 1 CrossOver- 00:07:51.991 [2024-11-17 23:07:48.433156] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.992 [2024-11-17 23:07:48.433184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.992 [2024-11-17 23:07:48.433223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.992 
[2024-11-17 23:07:48.433238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.992 [2024-11-17 23:07:48.433292] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.992 [2024-11-17 23:07:48.433308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.992 [2024-11-17 23:07:48.433361] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.992 [2024-11-17 23:07:48.433377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:51.992 #29 NEW cov: 11867 ft: 14761 corp: 14/875b lim: 105 exec/s: 0 rss: 69Mb L: 96/104 MS: 1 ShuffleBytes- 00:07:51.992 [2024-11-17 23:07:48.483335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18423663121508859903 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.992 [2024-11-17 23:07:48.483363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.992 [2024-11-17 23:07:48.483405] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.992 [2024-11-17 23:07:48.483420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.992 [2024-11-17 23:07:48.483474] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.992 [2024-11-17 23:07:48.483490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.992 [2024-11-17 23:07:48.483551] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.992 [2024-11-17 23:07:48.483567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:51.992 #30 NEW cov: 11867 ft: 14776 corp: 15/972b lim: 105 exec/s: 0 rss: 69Mb L: 97/104 MS: 1 InsertByte- 00:07:51.992 [2024-11-17 23:07:48.523502] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.992 [2024-11-17 23:07:48.523531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.992 [2024-11-17 23:07:48.523584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.992 [2024-11-17 23:07:48.523600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:51.992 [2024-11-17 23:07:48.523652] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6944091433278004830 len:24139 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.992 [2024-11-17 23:07:48.523667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:51.992 [2024-11-17 23:07:48.523720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.992 [2024-11-17 23:07:48.523735] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:51.992 [2024-11-17 23:07:48.523789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:403726925824 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.992 [2024-11-17 23:07:48.523805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:51.992 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:51.992 #31 NEW cov: 11890 ft: 14851 corp: 16/1077b lim: 105 exec/s: 0 rss: 69Mb L: 105/105 MS: 1 CopyPart- 00:07:51.992 [2024-11-17 23:07:48.573209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3906369333256140342 len:13879 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.992 [2024-11-17 23:07:48.573237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:51.992 #32 NEW cov: 11890 ft: 14866 corp: 17/1112b lim: 105 exec/s: 0 rss: 69Mb L: 35/105 MS: 1 CrossOver- 00:07:52.251 [2024-11-17 23:07:48.613288] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.251 [2024-11-17 23:07:48.613316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.251 #33 NEW cov: 11890 ft: 14894 corp: 18/1135b lim: 105 exec/s: 0 rss: 69Mb L: 23/105 MS: 1 InsertRepeatedBytes- 00:07:52.251 [2024-11-17 23:07:48.653669] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.251 [2024-11-17 23:07:48.653697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.251 [2024-11-17 23:07:48.653735] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.251 [2024-11-17 23:07:48.653750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.251 [2024-11-17 23:07:48.653805] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.251 [2024-11-17 23:07:48.653823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.251 #34 NEW cov: 11890 ft: 14932 corp: 19/1207b lim: 105 exec/s: 34 rss: 69Mb L: 72/105 MS: 1 ChangeBinInt- 00:07:52.251 [2024-11-17 23:07:48.693751] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.251 [2024-11-17 23:07:48.693779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.251 [2024-11-17 23:07:48.693816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.251 [2024-11-17 23:07:48.693831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.251 [2024-11-17 23:07:48.693883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.251 [2024-11-17 23:07:48.693899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.251 #35 NEW cov: 11890 ft: 14982 corp: 20/1290b lim: 105 exec/s: 35 rss: 69Mb L: 83/105 MS: 1 ShuffleBytes- 00:07:52.251 [2024-11-17 23:07:48.733632] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.251 [2024-11-17 23:07:48.733659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.251 #36 NEW cov: 11890 ft: 15055 corp: 21/1317b lim: 105 exec/s: 36 rss: 69Mb L: 27/105 MS: 1 InsertRepeatedBytes- 00:07:52.251 [2024-11-17 23:07:48.774225] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.251 [2024-11-17 23:07:48.774253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.251 [2024-11-17 23:07:48.774298] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.251 [2024-11-17 23:07:48.774313] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.251 [2024-11-17 23:07:48.774368] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6944091433278004830 len:24139 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.251 [2024-11-17 23:07:48.774382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.251 [2024-11-17 23:07:48.774450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.251 [2024-11-17 23:07:48.774465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.251 [2024-11-17 23:07:48.774518] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:403726925824 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.251 [2024-11-17 23:07:48.774537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:52.251 #37 NEW cov: 11890 ft: 15086 corp: 22/1422b lim: 105 exec/s: 37 rss: 69Mb L: 105/105 MS: 1 
ChangeBinInt- 00:07:52.251 [2024-11-17 23:07:48.814003] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446742978492891135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.251 [2024-11-17 23:07:48.814032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.251 [2024-11-17 23:07:48.814090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.251 [2024-11-17 23:07:48.814107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.251 #38 NEW cov: 11890 ft: 15122 corp: 23/1481b lim: 105 exec/s: 38 rss: 69Mb L: 59/105 MS: 1 InsertRepeatedBytes- 00:07:52.251 [2024-11-17 23:07:48.854110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446742978492891135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.251 [2024-11-17 23:07:48.854139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.251 [2024-11-17 23:07:48.854191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.251 [2024-11-17 23:07:48.854206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.510 #39 NEW cov: 11890 ft: 15141 corp: 24/1541b lim: 105 exec/s: 39 rss: 69Mb L: 60/105 MS: 1 InsertByte- 00:07:52.510 [2024-11-17 23:07:48.894107] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3906364935209629238 len:13879 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.510 [2024-11-17 23:07:48.894136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.510 #40 NEW cov: 11890 ft: 15165 corp: 25/1576b lim: 105 exec/s: 40 rss: 69Mb L: 35/105 MS: 1 ChangeBit- 00:07:52.510 [2024-11-17 23:07:48.934479] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.510 [2024-11-17 23:07:48.934507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.510 [2024-11-17 23:07:48.934552] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.510 [2024-11-17 23:07:48.934567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.510 [2024-11-17 23:07:48.934623] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6944091433278004830 len:24139 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.510 [2024-11-17 23:07:48.934639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.510 #41 NEW cov: 11890 ft: 15206 corp: 26/1649b lim: 105 exec/s: 41 rss: 69Mb L: 73/105 MS: 1 EraseBytes- 00:07:52.510 [2024-11-17 23:07:48.974721] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 
lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.510 [2024-11-17 23:07:48.974749] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.510 [2024-11-17 23:07:48.974795] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.510 [2024-11-17 23:07:48.974810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.510 [2024-11-17 23:07:48.974863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.510 [2024-11-17 23:07:48.974880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.510 [2024-11-17 23:07:48.974933] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.510 [2024-11-17 23:07:48.974950] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.510 #42 NEW cov: 11890 ft: 15241 corp: 27/1745b lim: 105 exec/s: 42 rss: 70Mb L: 96/105 MS: 1 ShuffleBytes- 00:07:52.510 [2024-11-17 23:07:49.014744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.510 [2024-11-17 23:07:49.014772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.510 [2024-11-17 23:07:49.014810] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.510 [2024-11-17 23:07:49.014825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.510 [2024-11-17 23:07:49.014880] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6799976249474834016 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.510 [2024-11-17 23:07:49.014896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.510 #43 NEW cov: 11890 ft: 15258 corp: 28/1818b lim: 105 exec/s: 43 rss: 70Mb L: 73/105 MS: 1 CrossOver- 00:07:52.510 [2024-11-17 23:07:49.054941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:14106333700321166275 len:50116 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.510 [2024-11-17 23:07:49.054969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.510 [2024-11-17 23:07:49.055014] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:14106333703424951235 len:50116 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.510 [2024-11-17 23:07:49.055030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.510 [2024-11-17 23:07:49.055084] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:14106333703424951235 len:50116 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.511 [2024-11-17 23:07:49.055100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.511 [2024-11-17 23:07:49.055152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:14106333703424951235 len:50116 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.511 [2024-11-17 23:07:49.055168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.511 #45 NEW cov: 11890 ft: 15267 corp: 29/1912b lim: 105 exec/s: 45 rss: 70Mb L: 94/105 MS: 2 InsertByte-InsertRepeatedBytes- 00:07:52.511 [2024-11-17 23:07:49.094919] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.511 [2024-11-17 23:07:49.094947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.511 [2024-11-17 23:07:49.094988] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.511 [2024-11-17 23:07:49.095003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.511 [2024-11-17 23:07:49.095058] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:26562407212646400 len:18945 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.511 [2024-11-17 23:07:49.095073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.511 #46 NEW cov: 11890 ft: 15270 corp: 30/1986b lim: 105 exec/s: 46 rss: 70Mb L: 74/105 MS: 1 ChangeBinInt- 00:07:52.776 [2024-11-17 23:07:49.134825] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3906369333256140340 len:13879 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.776 [2024-11-17 23:07:49.134853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.776 #47 NEW cov: 11890 ft: 15279 corp: 31/2021b lim: 105 exec/s: 47 rss: 70Mb L: 35/105 MS: 1 ChangeBit- 00:07:52.776 [2024-11-17 23:07:49.175418] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.776 [2024-11-17 23:07:49.175445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.776 [2024-11-17 23:07:49.175493] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.776 [2024-11-17 23:07:49.175509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.776 [2024-11-17 23:07:49.175565] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6944091433278004830 len:24139 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.776 [2024-11-17 23:07:49.175581] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.776 [2024-11-17 23:07:49.175634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.776 [2024-11-17 23:07:49.175647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.776 [2024-11-17 23:07:49.175702] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:403726925828 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.776 [2024-11-17 23:07:49.175717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:52.776 #48 NEW cov: 11890 ft: 15331 corp: 32/2126b lim: 105 exec/s: 48 rss: 70Mb L: 105/105 MS: 1 ChangeBit- 00:07:52.776 [2024-11-17 23:07:49.215038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.776 [2024-11-17 23:07:49.215066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.776 #49 NEW cov: 11890 ft: 15353 corp: 33/2153b lim: 105 exec/s: 49 rss: 70Mb L: 27/105 MS: 1 ShuffleBytes- 00:07:52.776 [2024-11-17 23:07:49.255538] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.776 [2024-11-17 23:07:49.255565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.776 [2024-11-17 23:07:49.255614] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.776 [2024-11-17 23:07:49.255630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.777 [2024-11-17 23:07:49.255681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.777 [2024-11-17 23:07:49.255697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.777 [2024-11-17 23:07:49.255750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.777 [2024-11-17 23:07:49.255765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.777 #50 NEW cov: 11890 ft: 15364 corp: 34/2249b lim: 105 exec/s: 50 rss: 70Mb L: 96/105 MS: 1 CopyPart- 00:07:52.777 [2024-11-17 23:07:49.295418] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3906369333256140342 len:13879 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.777 [2024-11-17 23:07:49.295446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.777 [2024-11-17 23:07:49.295499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:0 lba:3906369333256140342 len:13879 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.777 [2024-11-17 23:07:49.295514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.777 #51 NEW cov: 11890 ft: 15371 corp: 35/2294b lim: 105 exec/s: 51 rss: 70Mb L: 45/105 MS: 1 ChangeBit- 00:07:52.777 [2024-11-17 23:07:49.335816] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.777 [2024-11-17 23:07:49.335843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.777 [2024-11-17 23:07:49.335893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.778 [2024-11-17 23:07:49.335908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.778 [2024-11-17 23:07:49.335963] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.778 [2024-11-17 23:07:49.335979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.778 [2024-11-17 23:07:49.336034] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.778 [2024-11-17 23:07:49.336050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:52.778 #52 NEW cov: 11890 ft: 15381 corp: 36/2390b lim: 105 exec/s: 52 rss: 70Mb L: 96/105 MS: 1 ChangeBinInt- 00:07:52.778 [2024-11-17 23:07:49.375903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.778 [2024-11-17 23:07:49.375931] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:52.778 [2024-11-17 23:07:49.375975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.778 [2024-11-17 23:07:49.375991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:52.778 [2024-11-17 23:07:49.376044] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.778 [2024-11-17 23:07:49.376059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:52.778 [2024-11-17 23:07:49.376110] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744069435392767 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:52.778 [2024-11-17 23:07:49.376126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.038 #53 NEW cov: 11890 ft: 15409 corp: 
37/2494b lim: 105 exec/s: 53 rss: 70Mb L: 104/105 MS: 1 CMP- DE: "\000\213\017_\320\001=\202"- 00:07:53.038 [2024-11-17 23:07:49.415757] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.038 [2024-11-17 23:07:49.415790] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.038 [2024-11-17 23:07:49.415847] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.038 [2024-11-17 23:07:49.415863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.038 #54 NEW cov: 11890 ft: 15415 corp: 38/2548b lim: 105 exec/s: 54 rss: 70Mb L: 54/105 MS: 1 EraseBytes- 00:07:53.038 [2024-11-17 23:07:49.456049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.038 [2024-11-17 23:07:49.456078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.038 [2024-11-17 23:07:49.456117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.038 [2024-11-17 23:07:49.456133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.038 [2024-11-17 23:07:49.456188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.038 [2024-11-17 23:07:49.456204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.038 #55 NEW cov: 11890 ft: 15433 corp: 39/2612b lim: 105 exec/s: 55 rss: 70Mb L: 64/105 MS: 1 CrossOver- 00:07:53.038 [2024-11-17 23:07:49.496250] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.038 [2024-11-17 23:07:49.496280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.038 [2024-11-17 23:07:49.496318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.038 [2024-11-17 23:07:49.496333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.038 [2024-11-17 23:07:49.496388] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6799976246779207262 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.038 [2024-11-17 23:07:49.496403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.038 [2024-11-17 23:07:49.496458] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.038 [2024-11-17 
23:07:49.496473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.038 #56 NEW cov: 11890 ft: 15442 corp: 40/2708b lim: 105 exec/s: 56 rss: 70Mb L: 96/105 MS: 1 CrossOver- 00:07:53.038 [2024-11-17 23:07:49.536106] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:3906369333256140342 len:13879 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.038 [2024-11-17 23:07:49.536135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.038 [2024-11-17 23:07:49.536190] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:3906369333256140342 len:13879 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.038 [2024-11-17 23:07:49.536205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.038 #57 NEW cov: 11890 ft: 15478 corp: 41/2760b lim: 105 exec/s: 57 rss: 70Mb L: 52/105 MS: 1 ChangeBinInt- 00:07:53.038 [2024-11-17 23:07:49.576482] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.038 [2024-11-17 23:07:49.576510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.038 [2024-11-17 23:07:49.576560] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18426680980094844927 len:47289 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.038 [2024-11-17 23:07:49.576575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.038 [2024-11-17 23:07:49.576628] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:6799976246779338334 len:24159 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.038 [2024-11-17 23:07:49.576644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.038 [2024-11-17 23:07:49.576696] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.038 [2024-11-17 23:07:49.576712] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.038 #58 NEW cov: 11890 ft: 15501 corp: 42/2863b lim: 105 exec/s: 58 rss: 70Mb L: 103/105 MS: 1 InsertRepeatedBytes- 00:07:53.038 [2024-11-17 23:07:49.616603] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.038 [2024-11-17 23:07:49.616632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.038 [2024-11-17 23:07:49.616673] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.038 [2024-11-17 23:07:49.616687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.038 [2024-11-17 23:07:49.616740] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.038 [2024-11-17 23:07:49.616755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.039 [2024-11-17 23:07:49.616811] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.039 [2024-11-17 23:07:49.616827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.039 #59 NEW cov: 11890 ft: 15508 corp: 43/2962b lim: 105 exec/s: 59 rss: 70Mb L: 99/105 MS: 1 InsertRepeatedBytes- 00:07:53.297 [2024-11-17 23:07:49.656736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:18446744069599133695 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.297 [2024-11-17 23:07:49.656764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:53.297 [2024-11-17 23:07:49.656808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.297 [2024-11-17 23:07:49.656823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:53.298 [2024-11-17 23:07:49.656878] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.298 [2024-11-17 23:07:49.656894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:53.298 [2024-11-17 23:07:49.656952] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:53.298 [2024-11-17 23:07:49.656966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:53.298 #60 NEW cov: 11890 ft: 15553 corp: 44/3059b lim: 105 exec/s: 30 rss: 70Mb L: 97/105 MS: 1 CrossOver- 00:07:53.298 #60 DONE cov: 11890 ft: 15553 corp: 44/3059b lim: 105 exec/s: 30 rss: 70Mb 00:07:53.298 ###### Recommended dictionary. ###### 00:07:53.298 "\000\213\017_\320\001=\202" # Uses: 0 00:07:53.298 ###### End of recommended dictionary. 
###### 00:07:53.298 Done 60 runs in 2 second(s) 00:07:53.298 23:07:49 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_16.conf 00:07:53.298 23:07:49 -- ../common.sh@72 -- # (( i++ )) 00:07:53.298 23:07:49 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:53.298 23:07:49 -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:07:53.298 23:07:49 -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:07:53.298 23:07:49 -- nvmf/run.sh@24 -- # local timen=1 00:07:53.298 23:07:49 -- nvmf/run.sh@25 -- # local core=0x1 00:07:53.298 23:07:49 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:53.298 23:07:49 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:07:53.298 23:07:49 -- nvmf/run.sh@29 -- # printf %02d 17 00:07:53.298 23:07:49 -- nvmf/run.sh@29 -- # port=4417 00:07:53.298 23:07:49 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:53.298 23:07:49 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:07:53.298 23:07:49 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:53.298 23:07:49 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 -r /var/tmp/spdk17.sock 00:07:53.298 [2024-11-17 23:07:49.815173] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:53.298 [2024-11-17 23:07:49.815225] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304424 ] 00:07:53.298 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.556 [2024-11-17 23:07:49.994735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.556 [2024-11-17 23:07:50.065326] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:53.556 [2024-11-17 23:07:50.065469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.556 [2024-11-17 23:07:50.124028] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.556 [2024-11-17 23:07:50.140371] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:07:53.556 INFO: Running with entropic power schedule (0xFF, 100). 00:07:53.556 INFO: Seed: 3703510866 00:07:53.815 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:53.815 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:53.815 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:07:53.815 INFO: A corpus is not provided, starting from an empty corpus 00:07:53.815 #2 INITED exec/s: 0 rss: 61Mb 00:07:53.815 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:53.815 This may also happen if the target rejected all inputs we tried so far 00:07:53.815 [2024-11-17 23:07:50.188788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070069616639 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:53.815 [2024-11-17 23:07:50.188820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.074 NEW_FUNC[1/671]: 0x4545f8 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:07:54.074 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:54.074 #11 NEW cov: 11683 ft: 11685 corp: 2/30b lim: 120 exec/s: 0 rss: 68Mb L: 29/29 MS: 4 ShuffleBytes-CopyPart-ChangeByte-InsertRepeatedBytes- 00:07:54.074 [2024-11-17 23:07:50.509575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070069616639 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.074 [2024-11-17 23:07:50.509608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.074 NEW_FUNC[1/1]: 0xedf368 in rte_rdtsc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/rte_cycles.h:31 00:07:54.074 #12 NEW cov: 11797 ft: 12068 corp: 3/59b lim: 120 exec/s: 0 rss: 68Mb L: 29/29 MS: 1 ShuffleBytes- 00:07:54.074 [2024-11-17 23:07:50.559648] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069584915199 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.074 [2024-11-17 23:07:50.559678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.074 #13 NEW cov: 11803 ft: 12442 corp: 4/89b lim: 120 exec/s: 0 rss: 68Mb L: 30/30 MS: 1 CrossOver- 00:07:54.074 [2024-11-17 23:07:50.599756] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069584915199 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.074 [2024-11-17 23:07:50.599786] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.074 #14 NEW cov: 11888 ft: 12703 corp: 5/119b lim: 120 exec/s: 0 rss: 68Mb L: 30/30 MS: 1 ChangeByte- 00:07:54.074 [2024-11-17 23:07:50.639863] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069584915199 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.074 [2024-11-17 23:07:50.639892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.074 #20 NEW cov: 11888 ft: 12798 corp: 6/151b lim: 120 exec/s: 0 rss: 69Mb L: 32/32 MS: 1 CrossOver- 00:07:54.074 [2024-11-17 23:07:50.680144] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.074 [2024-11-17 23:07:50.680172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.074 [2024-11-17 23:07:50.680226] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:07:54.074 [2024-11-17 23:07:50.680241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.333 #23 NEW cov: 11888 ft: 13710 corp: 7/203b lim: 120 exec/s: 0 rss: 69Mb L: 52/52 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:07:54.333 [2024-11-17 23:07:50.720102] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446462599387873279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.333 [2024-11-17 23:07:50.720129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.333 #29 NEW cov: 11888 ft: 13739 corp: 8/232b lim: 120 exec/s: 0 rss: 69Mb L: 29/52 MS: 1 ChangeBinInt- 00:07:54.333 [2024-11-17 23:07:50.760216] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2813342388112588799 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.333 [2024-11-17 23:07:50.760243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.333 #30 NEW cov: 11888 ft: 13752 corp: 9/267b lim: 120 exec/s: 0 rss: 69Mb L: 35/52 MS: 1 CrossOver- 00:07:54.333 [2024-11-17 23:07:50.800322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069584915199 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.333 [2024-11-17 23:07:50.800349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.333 #31 NEW cov: 11888 ft: 13875 corp: 10/304b lim: 120 exec/s: 0 rss: 69Mb L: 37/52 MS: 1 CrossOver- 00:07:54.333 [2024-11-17 23:07:50.840487] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744070069616639 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.333 [2024-11-17 23:07:50.840514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.333 #32 NEW cov: 11888 ft: 13956 corp: 11/334b lim: 120 exec/s: 0 rss: 69Mb L: 30/52 MS: 1 InsertByte- 00:07:54.333 [2024-11-17 23:07:50.880544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446462599387873279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.334 [2024-11-17 23:07:50.880572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.334 #33 NEW cov: 11888 ft: 14080 corp: 12/363b lim: 120 exec/s: 0 rss: 69Mb L: 29/52 MS: 1 CrossOver- 00:07:54.334 [2024-11-17 23:07:50.920868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446505523290966783 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.334 [2024-11-17 23:07:50.920895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.334 [2024-11-17 23:07:50.920939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:124554051584 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.334 [2024-11-17 23:07:50.920953] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.592 #34 NEW cov: 11888 ft: 14126 corp: 
13/427b lim: 120 exec/s: 0 rss: 69Mb L: 64/64 MS: 1 CrossOver- 00:07:54.592 [2024-11-17 23:07:50.961271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446505523290966783 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.592 [2024-11-17 23:07:50.961300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.592 [2024-11-17 23:07:50.961337] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1302123111085380114 len:4627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.592 [2024-11-17 23:07:50.961353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.592 [2024-11-17 23:07:50.961407] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1302123111085380114 len:4627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.592 [2024-11-17 23:07:50.961423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.592 [2024-11-17 23:07:50.961477] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744069416550399 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.592 [2024-11-17 23:07:50.961492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.592 #35 NEW cov: 11888 ft: 14659 corp: 14/533b lim: 120 exec/s: 0 rss: 69Mb L: 106/106 MS: 1 InsertRepeatedBytes- 00:07:54.592 [2024-11-17 23:07:51.010960] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446462599387873279 len:48385 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.592 [2024-11-17 23:07:51.010989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.592 #36 NEW cov: 11888 ft: 14684 corp: 15/563b lim: 120 exec/s: 0 rss: 69Mb L: 30/106 MS: 1 InsertByte- 00:07:54.592 [2024-11-17 23:07:51.051191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446462775481532415 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.592 [2024-11-17 23:07:51.051220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.592 [2024-11-17 23:07:51.051273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.592 [2024-11-17 23:07:51.051290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.592 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:54.592 #41 NEW cov: 11911 ft: 14699 corp: 16/618b lim: 120 exec/s: 0 rss: 69Mb L: 55/106 MS: 5 EraseBytes-ChangeByte-ChangeBinInt-ShuffleBytes-InsertRepeatedBytes- 00:07:54.592 [2024-11-17 23:07:51.091297] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427652052418559 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.592 [2024-11-17 23:07:51.091325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.592 [2024-11-17 23:07:51.091378] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.592 [2024-11-17 23:07:51.091393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.592 #42 NEW cov: 11911 ft: 14716 corp: 17/684b lim: 120 exec/s: 0 rss: 69Mb L: 66/106 MS: 1 InsertRepeatedBytes- 00:07:54.593 [2024-11-17 23:07:51.131266] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446462599387873220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.593 [2024-11-17 23:07:51.131294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.593 #43 NEW cov: 11911 ft: 14791 corp: 18/713b lim: 120 exec/s: 0 rss: 69Mb L: 29/106 MS: 1 ChangeByte- 00:07:54.593 [2024-11-17 23:07:51.171434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069584915199 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.593 [2024-11-17 23:07:51.171462] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.593 #44 NEW cov: 11911 ft: 14806 corp: 19/745b lim: 120 exec/s: 44 rss: 69Mb L: 32/106 MS: 1 ChangeByte- 00:07:54.851 [2024-11-17 23:07:51.211548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18374686480326655999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.851 [2024-11-17 23:07:51.211576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.851 #45 NEW cov: 11911 ft: 14819 corp: 20/774b lim: 120 exec/s: 45 rss: 69Mb L: 29/106 MS: 1 ChangeBinInt- 00:07:54.851 [2024-11-17 23:07:51.251975] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446462599387873279 len:48385 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.851 [2024-11-17 23:07:51.252004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.851 [2024-11-17 23:07:51.252047] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.851 [2024-11-17 23:07:51.252062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.851 [2024-11-17 23:07:51.252115] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.851 [2024-11-17 23:07:51.252131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.851 #46 NEW cov: 11911 ft: 15137 corp: 21/869b lim: 120 exec/s: 46 rss: 69Mb L: 95/106 MS: 1 InsertRepeatedBytes- 00:07:54.851 [2024-11-17 23:07:51.291923] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427652052418559 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.851 [2024-11-17 23:07:51.291951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:07:54.851 [2024-11-17 23:07:51.292004] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.851 [2024-11-17 23:07:51.292020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.851 #47 NEW cov: 11911 ft: 15154 corp: 22/936b lim: 120 exec/s: 47 rss: 69Mb L: 67/106 MS: 1 InsertByte- 00:07:54.851 [2024-11-17 23:07:51.331991] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069584915199 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.852 [2024-11-17 23:07:51.332019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.852 #48 NEW cov: 11911 ft: 15184 corp: 23/973b lim: 120 exec/s: 48 rss: 70Mb L: 37/106 MS: 1 CMP- DE: "\336\307 \002\000\000\000\000"- 00:07:54.852 [2024-11-17 23:07:51.372191] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446505523290966783 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.852 [2024-11-17 23:07:51.372219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.852 [2024-11-17 23:07:51.372269] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:124554051584 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.852 [2024-11-17 23:07:51.372285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.852 #49 NEW cov: 11911 ft: 15278 corp: 24/1037b lim: 120 exec/s: 49 rss: 70Mb L: 64/106 MS: 1 ChangeByte- 00:07:54.852 [2024-11-17 23:07:51.412548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:184549375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.852 [2024-11-17 23:07:51.412577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:54.852 [2024-11-17 23:07:51.412625] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.852 [2024-11-17 23:07:51.412641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:54.852 [2024-11-17 23:07:51.412693] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.852 [2024-11-17 23:07:51.412709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:54.852 [2024-11-17 23:07:51.412762] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.852 [2024-11-17 23:07:51.412774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:54.852 #53 NEW cov: 11911 ft: 15287 corp: 25/1153b lim: 120 exec/s: 53 rss: 70Mb L: 116/116 MS: 4 EraseBytes-ChangeBit-ChangeByte-InsertRepeatedBytes- 00:07:54.852 [2024-11-17 23:07:51.452224] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:0 lba:18446744069584915199 len:65293 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:54.852 [2024-11-17 23:07:51.452253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.112 #54 NEW cov: 11911 ft: 15390 corp: 26/1183b lim: 120 exec/s: 54 rss: 70Mb L: 30/116 MS: 1 CMP- DE: "\014\336\306\004a\017\213\000"- 00:07:55.112 [2024-11-17 23:07:51.492639] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427652052418559 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.492667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.112 [2024-11-17 23:07:51.492704] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:4278190080 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.492720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.112 [2024-11-17 23:07:51.492772] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:11574427654092267680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.492787] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.112 #55 NEW cov: 11911 ft: 15400 corp: 27/1276b lim: 120 exec/s: 55 rss: 70Mb L: 93/116 MS: 1 CrossOver- 00:07:55.112 [2024-11-17 23:07:51.532789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446462599387873279 len:48385 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.532815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.112 [2024-11-17 23:07:51.532855] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.532870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.112 [2024-11-17 23:07:51.532921] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.532937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.112 #56 NEW cov: 11911 ft: 15410 corp: 28/1371b lim: 120 exec/s: 56 rss: 70Mb L: 95/116 MS: 1 ChangeBinInt- 00:07:55.112 [2024-11-17 23:07:51.573017] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446505523290966783 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.573044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.112 [2024-11-17 23:07:51.573086] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:1302123115061580306 len:4627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.573101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 
m:0 dnr:1 00:07:55.112 [2024-11-17 23:07:51.573154] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:1302123111085380114 len:4627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.573169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.112 [2024-11-17 23:07:51.573221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18446744069416550399 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.573237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.112 #57 NEW cov: 11911 ft: 15433 corp: 29/1477b lim: 120 exec/s: 57 rss: 70Mb L: 106/116 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:07:55.112 [2024-11-17 23:07:51.622844] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427652052418559 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.622872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.112 [2024-11-17 23:07:51.622926] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.622942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.112 #58 NEW cov: 11911 ft: 15444 corp: 30/1543b lim: 120 exec/s: 58 rss: 70Mb L: 66/116 MS: 1 ShuffleBytes- 00:07:55.112 [2024-11-17 23:07:51.663376] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:184549375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.663404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.112 [2024-11-17 23:07:51.663453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.663468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.112 [2024-11-17 23:07:51.663522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.663542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.112 [2024-11-17 23:07:51.663594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.663609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.112 [2024-11-17 23:07:51.663662] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:0 lba:0 len:2816 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.663677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:55.112 #59 
NEW cov: 11911 ft: 15509 corp: 31/1663b lim: 120 exec/s: 59 rss: 70Mb L: 120/120 MS: 1 InsertRepeatedBytes- 00:07:55.112 [2024-11-17 23:07:51.713116] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427652052418559 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.713144] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.112 [2024-11-17 23:07:51.713196] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.112 [2024-11-17 23:07:51.713213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.371 #60 NEW cov: 11911 ft: 15521 corp: 32/1713b lim: 120 exec/s: 60 rss: 70Mb L: 50/120 MS: 1 CrossOver- 00:07:55.371 [2024-11-17 23:07:51.753207] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574427652052418559 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.371 [2024-11-17 23:07:51.753236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.371 [2024-11-17 23:07:51.753287] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427654092267680 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.371 [2024-11-17 23:07:51.753303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.371 #61 NEW cov: 11911 ft: 15549 corp: 33/1780b lim: 120 exec/s: 61 rss: 70Mb L: 67/120 MS: 1 InsertByte- 00:07:55.371 [2024-11-17 23:07:51.793665] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446462775481532415 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.371 [2024-11-17 23:07:51.793694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.372 [2024-11-17 23:07:51.793730] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.372 [2024-11-17 23:07:51.793746] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.372 [2024-11-17 23:07:51.793799] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:18445613754583351296 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.372 [2024-11-17 23:07:51.793815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.372 [2024-11-17 23:07:51.793866] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:0 lba:18157383382357244923 len:64508 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.372 [2024-11-17 23:07:51.793881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:55.372 #62 NEW cov: 11911 ft: 15568 corp: 34/1894b lim: 120 exec/s: 62 rss: 70Mb L: 114/120 MS: 1 InsertRepeatedBytes- 00:07:55.372 [2024-11-17 23:07:51.833289] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:0 lba:18446744069584915199 len:65293 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.372 [2024-11-17 23:07:51.833316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.372 #63 NEW cov: 11911 ft: 15626 corp: 35/1924b lim: 120 exec/s: 63 rss: 70Mb L: 30/120 MS: 1 ChangeBinInt- 00:07:55.372 [2024-11-17 23:07:51.873567] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446462775481532415 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.372 [2024-11-17 23:07:51.873594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.372 [2024-11-17 23:07:51.873647] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:2965947086361143593 len:10538 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.372 [2024-11-17 23:07:51.873663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.372 #64 NEW cov: 11911 ft: 15665 corp: 36/1980b lim: 120 exec/s: 64 rss: 70Mb L: 56/120 MS: 1 InsertByte- 00:07:55.372 [2024-11-17 23:07:51.913806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446462599387873279 len:48385 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.372 [2024-11-17 23:07:51.913834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.372 [2024-11-17 23:07:51.913871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.372 [2024-11-17 23:07:51.913887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.372 [2024-11-17 23:07:51.913942] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.372 [2024-11-17 23:07:51.913958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.372 #65 NEW cov: 11911 ft: 15679 corp: 37/2073b lim: 120 exec/s: 65 rss: 70Mb L: 93/120 MS: 1 EraseBytes- 00:07:55.372 [2024-11-17 23:07:51.953654] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446736373003520767 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.372 [2024-11-17 23:07:51.953682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.372 #66 NEW cov: 11911 ft: 15712 corp: 38/2103b lim: 120 exec/s: 66 rss: 70Mb L: 30/120 MS: 1 ChangeBinInt- 00:07:55.631 [2024-11-17 23:07:51.993768] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446744069584915199 len:65293 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.631 [2024-11-17 23:07:51.993796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.631 #67 NEW cov: 11911 ft: 15727 corp: 39/2134b lim: 120 exec/s: 67 rss: 70Mb L: 31/120 MS: 1 InsertByte- 00:07:55.631 [2024-11-17 23:07:52.033864] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 
lba:18446744069584915199 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.631 [2024-11-17 23:07:52.033893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.631 #68 NEW cov: 11911 ft: 15739 corp: 40/2165b lim: 120 exec/s: 68 rss: 70Mb L: 31/120 MS: 1 InsertByte- 00:07:55.631 [2024-11-17 23:07:52.074264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:11574426964857651199 len:24928 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.631 [2024-11-17 23:07:52.074293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.631 [2024-11-17 23:07:52.074329] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:11574427963329912992 len:41121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.631 [2024-11-17 23:07:52.074345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:55.631 [2024-11-17 23:07:52.074398] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:2694905855 len:30 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.631 [2024-11-17 23:07:52.074415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:55.631 #69 NEW cov: 11911 ft: 15770 corp: 41/2240b lim: 120 exec/s: 69 rss: 70Mb L: 75/120 MS: 1 CMP- DE: "\000\213\017a_\307\353\014"- 00:07:55.631 [2024-11-17 23:07:52.124138] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:18446462599387873220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.631 [2024-11-17 23:07:52.124165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.631 #70 NEW cov: 11911 ft: 15790 corp: 42/2270b lim: 120 exec/s: 70 rss: 70Mb L: 30/120 MS: 1 InsertByte- 00:07:55.631 [2024-11-17 23:07:52.164236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:2813342388112588799 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:07:55.632 [2024-11-17 23:07:52.164263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:55.632 #71 NEW cov: 11911 ft: 15804 corp: 43/2309b lim: 120 exec/s: 35 rss: 70Mb L: 39/120 MS: 1 CMP- DE: "\001\000\002\000"- 00:07:55.632 #71 DONE cov: 11911 ft: 15804 corp: 43/2309b lim: 120 exec/s: 35 rss: 70Mb 00:07:55.632 ###### Recommended dictionary. ###### 00:07:55.632 "\336\307 \002\000\000\000\000" # Uses: 0 00:07:55.632 "\014\336\306\004a\017\213\000" # Uses: 0 00:07:55.632 "\377\377\377\377\377\377\377\377" # Uses: 0 00:07:55.632 "\000\213\017a_\307\353\014" # Uses: 0 00:07:55.632 "\001\000\002\000" # Uses: 0 00:07:55.632 ###### End of recommended dictionary. 
###### 00:07:55.632 Done 71 runs in 2 second(s) 00:07:55.891 23:07:52 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_17.conf 00:07:55.891 23:07:52 -- ../common.sh@72 -- # (( i++ )) 00:07:55.891 23:07:52 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:07:55.891 23:07:52 -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:07:55.891 23:07:52 -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:07:55.891 23:07:52 -- nvmf/run.sh@24 -- # local timen=1 00:07:55.891 23:07:52 -- nvmf/run.sh@25 -- # local core=0x1 00:07:55.891 23:07:52 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:55.891 23:07:52 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:07:55.891 23:07:52 -- nvmf/run.sh@29 -- # printf %02d 18 00:07:55.891 23:07:52 -- nvmf/run.sh@29 -- # port=4418 00:07:55.891 23:07:52 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:55.891 23:07:52 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:07:55.891 23:07:52 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:07:55.891 23:07:52 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 -r /var/tmp/spdk18.sock 00:07:55.891 [2024-11-17 23:07:52.347704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:55.891 [2024-11-17 23:07:52.347795] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304788 ] 00:07:55.891 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.151 [2024-11-17 23:07:52.531361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.151 [2024-11-17 23:07:52.595326] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:56.151 [2024-11-17 23:07:52.595459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.151 [2024-11-17 23:07:52.653446] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.151 [2024-11-17 23:07:52.669795] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:07:56.151 INFO: Running with entropic power schedule (0xFF, 100). 00:07:56.151 INFO: Seed: 1938523507 00:07:56.151 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:07:56.151 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:07:56.151 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:07:56.151 INFO: A corpus is not provided, starting from an empty corpus 00:07:56.151 #2 INITED exec/s: 0 rss: 60Mb 00:07:56.151 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:07:56.151 This may also happen if the target rejected all inputs we tried so far 00:07:56.151 [2024-11-17 23:07:52.719075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:56.151 [2024-11-17 23:07:52.719104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.151 [2024-11-17 23:07:52.719139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:56.151 [2024-11-17 23:07:52.719155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.151 [2024-11-17 23:07:52.719205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:56.151 [2024-11-17 23:07:52.719219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.151 [2024-11-17 23:07:52.719270] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:56.151 [2024-11-17 23:07:52.719285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.410 NEW_FUNC[1/670]: 0x457e58 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:07:56.410 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:07:56.410 #11 NEW cov: 11622 ft: 11629 corp: 2/82b lim: 100 exec/s: 0 rss: 68Mb L: 81/81 MS: 4 ChangeByte-ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:07:56.410 [2024-11-17 23:07:53.019799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:56.410 [2024-11-17 23:07:53.019835] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.410 [2024-11-17 23:07:53.019885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:56.410 [2024-11-17 23:07:53.019899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.410 [2024-11-17 23:07:53.019948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:56.410 [2024-11-17 23:07:53.019963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.410 [2024-11-17 23:07:53.020012] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:56.410 [2024-11-17 23:07:53.020025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.669 #12 NEW cov: 11741 ft: 12104 corp: 3/163b lim: 100 exec/s: 0 rss: 68Mb L: 81/81 MS: 1 ChangeBinInt- 00:07:56.669 [2024-11-17 23:07:53.069849] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:56.669 [2024-11-17 23:07:53.069875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.669 [2024-11-17 23:07:53.069913] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:56.669 [2024-11-17 23:07:53.069927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.669 [2024-11-17 23:07:53.069975] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:56.669 [2024-11-17 23:07:53.069989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.669 [2024-11-17 23:07:53.070038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:56.669 [2024-11-17 23:07:53.070050] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.669 #13 NEW cov: 11747 ft: 12394 corp: 4/244b lim: 100 exec/s: 0 rss: 68Mb L: 81/81 MS: 1 ChangeByte- 00:07:56.669 [2024-11-17 23:07:53.109943] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:56.669 [2024-11-17 23:07:53.109971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.669 [2024-11-17 23:07:53.110014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:56.669 [2024-11-17 23:07:53.110028] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.669 [2024-11-17 23:07:53.110077] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:56.669 [2024-11-17 23:07:53.110091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.669 [2024-11-17 23:07:53.110141] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:56.670 [2024-11-17 23:07:53.110155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.670 #14 NEW cov: 11832 ft: 12634 corp: 5/342b lim: 100 exec/s: 0 rss: 68Mb L: 98/98 MS: 1 CopyPart- 00:07:56.670 [2024-11-17 23:07:53.150060] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:56.670 [2024-11-17 23:07:53.150086] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.670 [2024-11-17 23:07:53.150121] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:56.670 [2024-11-17 23:07:53.150134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.670 [2024-11-17 23:07:53.150189] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:56.670 [2024-11-17 23:07:53.150204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.670 [2024-11-17 23:07:53.150254] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:56.670 [2024-11-17 23:07:53.150268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.670 #15 NEW cov: 11832 ft: 12706 corp: 6/440b lim: 100 exec/s: 0 rss: 68Mb L: 98/98 MS: 1 ShuffleBytes- 00:07:56.670 [2024-11-17 23:07:53.190166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:56.670 [2024-11-17 23:07:53.190192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.670 [2024-11-17 23:07:53.190233] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:56.670 [2024-11-17 23:07:53.190247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.670 [2024-11-17 23:07:53.190278] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:56.670 [2024-11-17 23:07:53.190292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.670 [2024-11-17 23:07:53.190340] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:56.670 [2024-11-17 23:07:53.190354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.670 #16 NEW cov: 11832 ft: 12754 corp: 7/538b lim: 100 exec/s: 0 rss: 68Mb L: 98/98 MS: 1 ShuffleBytes- 00:07:56.670 [2024-11-17 23:07:53.230192] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:56.670 [2024-11-17 23:07:53.230217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.670 [2024-11-17 23:07:53.230253] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:56.670 [2024-11-17 23:07:53.230268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.670 [2024-11-17 23:07:53.230317] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:56.670 [2024-11-17 23:07:53.230332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.670 #17 NEW cov: 11832 ft: 13068 corp: 8/614b lim: 100 exec/s: 0 rss: 68Mb L: 76/98 MS: 1 EraseBytes- 00:07:56.670 [2024-11-17 23:07:53.270415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:56.670 [2024-11-17 23:07:53.270441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.670 [2024-11-17 23:07:53.270485] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:56.670 [2024-11-17 23:07:53.270499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.670 [2024-11-17 23:07:53.270550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:56.670 [2024-11-17 23:07:53.270565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:07:56.670 [2024-11-17 23:07:53.270614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:56.670 [2024-11-17 23:07:53.270629] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.929 #18 NEW cov: 11832 ft: 13109 corp: 9/712b lim: 100 exec/s: 0 rss: 68Mb L: 98/98 MS: 1 InsertRepeatedBytes- 00:07:56.929 [2024-11-17 23:07:53.310508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:56.929 [2024-11-17 23:07:53.310538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.929 [2024-11-17 23:07:53.310583] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:56.929 [2024-11-17 23:07:53.310598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.929 [2024-11-17 23:07:53.310647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:56.929 [2024-11-17 23:07:53.310661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.929 [2024-11-17 23:07:53.310710] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:56.929 [2024-11-17 23:07:53.310724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.929 #19 NEW cov: 11832 ft: 13180 corp: 10/800b lim: 100 exec/s: 0 rss: 68Mb L: 88/98 MS: 1 CrossOver- 00:07:56.929 [2024-11-17 23:07:53.350661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:56.929 [2024-11-17 23:07:53.350689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.929 [2024-11-17 23:07:53.350729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:56.929 [2024-11-17 23:07:53.350743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.929 [2024-11-17 23:07:53.350794] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:56.929 [2024-11-17 23:07:53.350808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.929 [2024-11-17 23:07:53.350861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:56.930 [2024-11-17 23:07:53.350876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.930 #20 NEW cov: 11832 ft: 13225 corp: 11/881b lim: 100 exec/s: 0 rss: 68Mb L: 81/98 MS: 1 EraseBytes- 00:07:56.930 [2024-11-17 23:07:53.390787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:56.930 [2024-11-17 23:07:53.390813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.930 [2024-11-17 23:07:53.390858] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:56.930 [2024-11-17 23:07:53.390872] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.930 [2024-11-17 23:07:53.390921] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:56.930 [2024-11-17 23:07:53.390936] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.930 [2024-11-17 23:07:53.390985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:56.930 [2024-11-17 23:07:53.390999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.930 #21 NEW cov: 11832 ft: 13267 corp: 12/979b lim: 100 exec/s: 0 rss: 69Mb L: 98/98 MS: 1 ChangeByte- 00:07:56.930 [2024-11-17 23:07:53.430886] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:56.930 [2024-11-17 23:07:53.430912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.930 [2024-11-17 23:07:53.430969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:56.930 [2024-11-17 23:07:53.430984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.930 [2024-11-17 23:07:53.431033] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:56.930 [2024-11-17 23:07:53.431049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.930 [2024-11-17 23:07:53.431097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:56.930 [2024-11-17 23:07:53.431111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.930 #22 NEW cov: 11832 ft: 13298 corp: 13/1070b lim: 100 exec/s: 0 rss: 69Mb L: 91/98 MS: 1 CopyPart- 00:07:56.930 [2024-11-17 23:07:53.470985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:56.930 [2024-11-17 23:07:53.471011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.930 [2024-11-17 23:07:53.471056] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:56.930 [2024-11-17 23:07:53.471069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.930 [2024-11-17 23:07:53.471120] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:56.930 [2024-11-17 23:07:53.471134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.930 [2024-11-17 23:07:53.471185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:56.930 [2024-11-17 23:07:53.471199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.930 #23 NEW cov: 11832 ft: 13328 corp: 14/1158b lim: 100 exec/s: 0 rss: 69Mb L: 88/98 MS: 1 ChangeBinInt- 00:07:56.930 [2024-11-17 23:07:53.511109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:56.930 [2024-11-17 23:07:53.511134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:56.930 [2024-11-17 23:07:53.511180] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:56.930 [2024-11-17 23:07:53.511194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:56.930 [2024-11-17 23:07:53.511244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:56.930 [2024-11-17 23:07:53.511259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:56.930 [2024-11-17 23:07:53.511308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:56.930 [2024-11-17 23:07:53.511321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:56.930 #24 NEW cov: 11832 ft: 13346 corp: 15/1239b lim: 100 exec/s: 0 rss: 69Mb L: 81/98 MS: 1 ChangeBit- 00:07:57.189 [2024-11-17 23:07:53.551219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.189 [2024-11-17 23:07:53.551244] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.189 [2024-11-17 23:07:53.551291] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.189 [2024-11-17 23:07:53.551304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.189 [2024-11-17 23:07:53.551355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.189 [2024-11-17 23:07:53.551369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.189 [2024-11-17 23:07:53.551419] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.189 [2024-11-17 23:07:53.551433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.189 #25 NEW cov: 11832 ft: 13477 corp: 16/1327b lim: 100 exec/s: 0 rss: 69Mb L: 88/98 MS: 1 ChangeBit- 00:07:57.189 [2024-11-17 23:07:53.591357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.189 [2024-11-17 23:07:53.591383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.189 [2024-11-17 23:07:53.591418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.189 [2024-11-17 23:07:53.591432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:07:57.189 [2024-11-17 23:07:53.591482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.189 [2024-11-17 23:07:53.591497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.189 [2024-11-17 23:07:53.591548] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.189 [2024-11-17 23:07:53.591562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.189 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:57.189 #26 NEW cov: 11855 ft: 13520 corp: 17/1408b lim: 100 exec/s: 0 rss: 69Mb L: 81/98 MS: 1 ChangeByte- 00:07:57.189 [2024-11-17 23:07:53.631490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.189 [2024-11-17 23:07:53.631515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.189 [2024-11-17 23:07:53.631565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.189 [2024-11-17 23:07:53.631580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.189 [2024-11-17 23:07:53.631629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.189 [2024-11-17 23:07:53.631643] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.189 [2024-11-17 23:07:53.631693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.189 [2024-11-17 23:07:53.631706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.189 #27 NEW cov: 11855 ft: 13530 corp: 18/1506b lim: 100 exec/s: 0 rss: 69Mb L: 98/98 MS: 1 ChangeBinInt- 00:07:57.189 [2024-11-17 23:07:53.671505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.189 [2024-11-17 23:07:53.671531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.189 [2024-11-17 23:07:53.671572] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.189 [2024-11-17 23:07:53.671585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.189 [2024-11-17 23:07:53.671637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.189 [2024-11-17 23:07:53.671654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.189 #28 NEW cov: 11855 ft: 13537 corp: 19/1582b lim: 100 exec/s: 0 rss: 69Mb L: 76/98 MS: 1 CrossOver- 00:07:57.189 [2024-11-17 23:07:53.711394] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.189 [2024-11-17 23:07:53.711419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.189 #29 NEW cov: 11855 ft: 14006 corp: 20/1607b lim: 100 exec/s: 29 rss: 69Mb L: 25/98 MS: 1 CrossOver- 00:07:57.189 [2024-11-17 23:07:53.751843] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.189 [2024-11-17 23:07:53.751869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.189 [2024-11-17 23:07:53.751910] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.190 [2024-11-17 23:07:53.751925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.190 [2024-11-17 23:07:53.751977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.190 [2024-11-17 23:07:53.751991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.190 [2024-11-17 23:07:53.752042] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.190 [2024-11-17 23:07:53.752056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.190 #30 NEW cov: 11855 ft: 14033 corp: 21/1695b lim: 100 exec/s: 30 rss: 69Mb L: 88/98 MS: 1 ChangeByte- 00:07:57.190 [2024-11-17 23:07:53.791746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.190 [2024-11-17 23:07:53.791772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.190 [2024-11-17 23:07:53.791807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.190 [2024-11-17 23:07:53.791820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.449 #31 NEW cov: 11855 ft: 14289 corp: 22/1745b lim: 100 exec/s: 31 rss: 69Mb L: 50/98 MS: 1 EraseBytes- 00:07:57.449 [2024-11-17 23:07:53.832082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.449 [2024-11-17 23:07:53.832107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.449 [2024-11-17 23:07:53.832146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.449 [2024-11-17 23:07:53.832160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.449 [2024-11-17 23:07:53.832211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.449 [2024-11-17 23:07:53.832224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.449 [2024-11-17 23:07:53.832275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.449 [2024-11-17 23:07:53.832291] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.449 
#32 NEW cov: 11855 ft: 14309 corp: 23/1826b lim: 100 exec/s: 32 rss: 69Mb L: 81/98 MS: 1 ChangeBit- 00:07:57.449 [2024-11-17 23:07:53.872196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.449 [2024-11-17 23:07:53.872222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.449 [2024-11-17 23:07:53.872261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.449 [2024-11-17 23:07:53.872275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.449 [2024-11-17 23:07:53.872326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.449 [2024-11-17 23:07:53.872340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.449 [2024-11-17 23:07:53.872392] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.449 [2024-11-17 23:07:53.872406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.449 #33 NEW cov: 11855 ft: 14328 corp: 24/1923b lim: 100 exec/s: 33 rss: 69Mb L: 97/98 MS: 1 InsertRepeatedBytes- 00:07:57.449 [2024-11-17 23:07:53.912308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.449 [2024-11-17 23:07:53.912333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.449 [2024-11-17 23:07:53.912373] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.449 [2024-11-17 23:07:53.912387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.449 [2024-11-17 23:07:53.912437] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.449 [2024-11-17 23:07:53.912451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.449 [2024-11-17 23:07:53.912500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.449 [2024-11-17 23:07:53.912514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.449 #34 NEW cov: 11855 ft: 14344 corp: 25/2021b lim: 100 exec/s: 34 rss: 69Mb L: 98/98 MS: 1 CMP- DE: "\001\000\000\000"- 00:07:57.449 [2024-11-17 23:07:53.952424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.449 [2024-11-17 23:07:53.952449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.449 [2024-11-17 23:07:53.952492] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.449 [2024-11-17 23:07:53.952507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.449 [2024-11-17 23:07:53.952559] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.449 [2024-11-17 23:07:53.952574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.449 [2024-11-17 23:07:53.952625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.449 [2024-11-17 23:07:53.952639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.449 #35 NEW cov: 11855 ft: 14353 corp: 26/2119b lim: 100 exec/s: 35 rss: 69Mb L: 98/98 MS: 1 ChangeBinInt- 00:07:57.449 [2024-11-17 23:07:53.992546] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.449 [2024-11-17 23:07:53.992573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.449 [2024-11-17 23:07:53.992619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.449 [2024-11-17 23:07:53.992633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.449 [2024-11-17 23:07:53.992686] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.449 [2024-11-17 23:07:53.992701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.449 [2024-11-17 23:07:53.992750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.449 [2024-11-17 23:07:53.992765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.449 #36 NEW cov: 11855 ft: 14361 corp: 27/2200b lim: 100 exec/s: 36 rss: 69Mb L: 81/98 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:07:57.449 [2024-11-17 23:07:54.032713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.449 [2024-11-17 23:07:54.032738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.449 [2024-11-17 23:07:54.032783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.449 [2024-11-17 23:07:54.032797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.449 [2024-11-17 23:07:54.032846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.449 [2024-11-17 23:07:54.032860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.449 [2024-11-17 23:07:54.032909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.449 [2024-11-17 23:07:54.032924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.449 #37 NEW cov: 11855 ft: 14428 corp: 28/2281b lim: 100 exec/s: 37 rss: 69Mb L: 81/98 MS: 1 ChangeByte- 00:07:57.708 [2024-11-17 23:07:54.072788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.708 [2024-11-17 23:07:54.072815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.708 [2024-11-17 23:07:54.072851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.709 [2024-11-17 23:07:54.072868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.072919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.709 [2024-11-17 23:07:54.072933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.072986] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.709 [2024-11-17 23:07:54.073002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.709 #38 NEW cov: 11855 ft: 14508 corp: 29/2378b lim: 100 exec/s: 38 rss: 70Mb L: 97/98 MS: 1 ChangeBinInt- 00:07:57.709 [2024-11-17 23:07:54.112942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.709 [2024-11-17 23:07:54.112969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.113014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.709 [2024-11-17 23:07:54.113029] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.113080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.709 [2024-11-17 23:07:54.113095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.113151] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.709 [2024-11-17 23:07:54.113166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.709 #39 NEW cov: 11855 ft: 14527 corp: 30/2459b lim: 100 exec/s: 39 rss: 70Mb L: 81/98 MS: 1 ChangeBit- 00:07:57.709 [2024-11-17 23:07:54.153032] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.709 [2024-11-17 23:07:54.153058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.153094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.709 [2024-11-17 23:07:54.153108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.153157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.709 [2024-11-17 23:07:54.153172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 
cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.153222] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.709 [2024-11-17 23:07:54.153237] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.709 #45 NEW cov: 11855 ft: 14534 corp: 31/2540b lim: 100 exec/s: 45 rss: 70Mb L: 81/98 MS: 1 CopyPart- 00:07:57.709 [2024-11-17 23:07:54.193134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.709 [2024-11-17 23:07:54.193161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.193198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.709 [2024-11-17 23:07:54.193213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.193263] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.709 [2024-11-17 23:07:54.193279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.193329] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.709 [2024-11-17 23:07:54.193344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.709 #46 NEW cov: 11855 ft: 14542 corp: 32/2629b lim: 100 exec/s: 46 rss: 70Mb L: 89/98 MS: 1 InsertByte- 00:07:57.709 [2024-11-17 23:07:54.233230] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.709 [2024-11-17 23:07:54.233258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.233296] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.709 [2024-11-17 23:07:54.233312] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.233362] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.709 [2024-11-17 23:07:54.233376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.233424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.709 [2024-11-17 23:07:54.233440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.709 #47 NEW cov: 11855 ft: 14546 corp: 33/2710b lim: 100 exec/s: 47 rss: 70Mb L: 81/98 MS: 1 CrossOver- 00:07:57.709 [2024-11-17 23:07:54.273321] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.709 [2024-11-17 23:07:54.273348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.273382] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.709 [2024-11-17 23:07:54.273395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.273446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.709 [2024-11-17 23:07:54.273461] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.273513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.709 [2024-11-17 23:07:54.273525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.709 #48 NEW cov: 11855 ft: 14605 corp: 34/2791b lim: 100 exec/s: 48 rss: 70Mb L: 81/98 MS: 1 ChangeBinInt- 00:07:57.709 [2024-11-17 23:07:54.313448] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.709 [2024-11-17 23:07:54.313474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.313517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.709 [2024-11-17 23:07:54.313537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.313588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.709 [2024-11-17 23:07:54.313601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.709 [2024-11-17 23:07:54.313653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.709 [2024-11-17 23:07:54.313667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.969 #49 NEW cov: 11855 ft: 14626 corp: 35/2882b lim: 100 exec/s: 49 rss: 70Mb L: 91/98 MS: 1 EraseBytes- 00:07:57.969 [2024-11-17 23:07:54.343487] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.969 [2024-11-17 23:07:54.343515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.969 [2024-11-17 23:07:54.343556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.969 [2024-11-17 23:07:54.343571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.969 [2024-11-17 23:07:54.343623] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.969 [2024-11-17 23:07:54.343640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.969 [2024-11-17 23:07:54.343689] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.969 [2024-11-17 23:07:54.343704] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.969 #50 NEW cov: 11855 ft: 14636 corp: 36/2967b lim: 100 exec/s: 50 rss: 70Mb L: 85/98 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:07:57.969 [2024-11-17 23:07:54.383667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.969 [2024-11-17 23:07:54.383695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.969 [2024-11-17 23:07:54.383739] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.969 [2024-11-17 23:07:54.383755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.969 [2024-11-17 23:07:54.383807] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.969 [2024-11-17 23:07:54.383822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.969 [2024-11-17 23:07:54.383874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.969 [2024-11-17 23:07:54.383889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.969 #51 NEW cov: 11855 ft: 14650 corp: 37/3058b lim: 100 exec/s: 51 rss: 70Mb L: 91/98 MS: 1 CopyPart- 00:07:57.969 [2024-11-17 23:07:54.413729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.969 [2024-11-17 23:07:54.413757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.969 [2024-11-17 23:07:54.413795] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.969 [2024-11-17 23:07:54.413809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.969 [2024-11-17 23:07:54.413860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.969 [2024-11-17 23:07:54.413875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.969 [2024-11-17 23:07:54.413926] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.969 [2024-11-17 23:07:54.413939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.969 #52 NEW cov: 11855 ft: 14664 corp: 38/3138b lim: 100 exec/s: 52 rss: 70Mb L: 80/98 MS: 1 EraseBytes- 00:07:57.969 [2024-11-17 23:07:54.453882] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.969 [2024-11-17 23:07:54.453909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.969 [2024-11-17 23:07:54.453945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.969 [2024-11-17 23:07:54.453960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.969 [2024-11-17 23:07:54.454010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.969 [2024-11-17 23:07:54.454026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.969 [2024-11-17 23:07:54.454078] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.969 [2024-11-17 23:07:54.454091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.969 #53 NEW cov: 11855 ft: 14680 corp: 39/3236b lim: 100 exec/s: 53 rss: 70Mb L: 98/98 MS: 1 ShuffleBytes- 00:07:57.969 [2024-11-17 23:07:54.493954] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.969 [2024-11-17 23:07:54.493980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.969 [2024-11-17 23:07:54.494018] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.969 [2024-11-17 23:07:54.494032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.969 [2024-11-17 23:07:54.494086] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.969 [2024-11-17 23:07:54.494100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.969 [2024-11-17 23:07:54.494153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.969 [2024-11-17 23:07:54.494167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:57.969 #54 NEW cov: 11855 ft: 14683 corp: 40/3329b lim: 100 exec/s: 54 rss: 70Mb L: 93/98 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:07:57.969 [2024-11-17 23:07:54.533947] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.969 [2024-11-17 23:07:54.533973] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:57.969 [2024-11-17 23:07:54.534008] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.969 [2024-11-17 23:07:54.534022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.969 [2024-11-17 23:07:54.534075] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.969 [2024-11-17 23:07:54.534090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.969 #55 NEW cov: 11855 ft: 14687 corp: 41/3405b lim: 100 exec/s: 55 rss: 70Mb L: 76/98 MS: 1 ShuffleBytes- 00:07:57.970 [2024-11-17 23:07:54.574177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:57.970 [2024-11-17 23:07:54.574204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:07:57.970 [2024-11-17 23:07:54.574239] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:57.970 [2024-11-17 23:07:54.574254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:57.970 [2024-11-17 23:07:54.574304] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:57.970 [2024-11-17 23:07:54.574319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:57.970 [2024-11-17 23:07:54.574370] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:07:57.970 [2024-11-17 23:07:54.574384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:58.229 #56 NEW cov: 11855 ft: 14689 corp: 42/3490b lim: 100 exec/s: 56 rss: 70Mb L: 85/98 MS: 1 PersAutoDict- DE: "\001\000\000\000"- 00:07:58.229 [2024-11-17 23:07:54.614174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.229 [2024-11-17 23:07:54.614201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.229 [2024-11-17 23:07:54.614240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:58.229 [2024-11-17 23:07:54.614254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.229 [2024-11-17 23:07:54.614307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:58.229 [2024-11-17 23:07:54.614321] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:58.229 #57 NEW cov: 11855 ft: 14729 corp: 43/3566b lim: 100 exec/s: 57 rss: 70Mb L: 76/98 MS: 1 ChangeByte- 00:07:58.229 [2024-11-17 23:07:54.654115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.229 [2024-11-17 23:07:54.654140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.229 #61 NEW cov: 11855 ft: 14755 corp: 44/3586b lim: 100 exec/s: 61 rss: 70Mb L: 20/98 MS: 4 EraseBytes-EraseBytes-CrossOver-PersAutoDict- DE: "\001\000\000\000"- 00:07:58.229 [2024-11-17 23:07:54.694525] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:07:58.229 [2024-11-17 23:07:54.694555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:58.229 [2024-11-17 23:07:54.694595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:07:58.229 [2024-11-17 23:07:54.694609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:58.229 [2024-11-17 23:07:54.694661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:07:58.229 [2024-11-17 23:07:54.694675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 
p:0 m:0 dnr:1
00:07:58.229 [2024-11-17 23:07:54.694726] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0
00:07:58.229 [2024-11-17 23:07:54.694740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1
00:07:58.229 #62 NEW cov: 11855 ft: 14774 corp: 45/3667b lim: 100 exec/s: 31 rss: 70Mb L: 81/98 MS: 1 CrossOver-
00:07:58.229 #62 DONE cov: 11855 ft: 14774 corp: 45/3667b lim: 100 exec/s: 31 rss: 70Mb
00:07:58.229 ###### Recommended dictionary. ######
00:07:58.229 "\001\000\000\000" # Uses: 5
00:07:58.229 ###### End of recommended dictionary. ######
00:07:58.229 Done 62 runs in 2 second(s)
00:07:58.229 23:07:54 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_18.conf
00:07:58.229 23:07:54 -- ../common.sh@72 -- # (( i++ ))
00:07:58.229 23:07:54 -- ../common.sh@72 -- # (( i < fuzz_num ))
00:07:58.229 23:07:54 -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1
00:07:58.489 23:07:54 -- nvmf/run.sh@23 -- # local fuzzer_type=19
00:07:58.489 23:07:54 -- nvmf/run.sh@24 -- # local timen=1
00:07:58.489 23:07:54 -- nvmf/run.sh@25 -- # local core=0x1
00:07:58.489 23:07:54 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
00:07:58.489 23:07:54 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf
00:07:58.489 23:07:54 -- nvmf/run.sh@29 -- # printf %02d 19
00:07:58.489 23:07:54 -- nvmf/run.sh@29 -- # port=4419
00:07:58.489 23:07:54 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
00:07:58.489 23:07:54 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419'
00:07:58.489 23:07:54 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf
00:07:58.489 23:07:54 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 -r /var/tmp/spdk19.sock
00:07:58.489 [2024-11-17 23:07:54.866928] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:58.489 [2024-11-17 23:07:54.866979] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1305325 ]
00:07:58.489 EAL: No free 2048 kB hugepages reported on node 1
00:07:58.748 [2024-11-17 23:07:55.039094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:58.748 [2024-11-17 23:07:55.101838] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:58.748 [2024-11-17 23:07:55.101974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:58.748 [2024-11-17 23:07:55.159860] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:58.748 [2024-11-17 23:07:55.176195] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 ***
00:07:58.748 INFO: Running with entropic power schedule (0xFF, 100).
00:07:58.749 INFO: Seed: 149553957
00:07:58.749 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5),
00:07:58.749 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668),
00:07:58.749 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19
00:07:58.749 INFO: A corpus is not provided, starting from an empty corpus
00:07:58.749 #2 INITED exec/s: 0 rss: 60Mb
00:07:58.749 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage?
00:07:58.749 This may also happen if the target rejected all inputs we tried so far
00:07:58.749 [2024-11-17 23:07:55.224893] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569
00:07:58.749 [2024-11-17 23:07:55.224924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:58.749 [2024-11-17 23:07:55.224980] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569
00:07:58.749 [2024-11-17 23:07:55.224996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:59.009 NEW_FUNC[1/670]: 0x45ae18 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582
00:07:59.009 NEW_FUNC[2/670]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780
00:07:59.009 #6 NEW cov: 11606 ft: 11605 corp: 2/30b lim: 50 exec/s: 0 rss: 69Mb L: 29/29 MS: 4 CopyPart-CrossOver-CrossOver-InsertRepeatedBytes-
00:07:59.009 [2024-11-17 23:07:55.525782] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536
00:07:59.009 [2024-11-17 23:07:55.525816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:59.009 [2024-11-17 23:07:55.525871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536
00:07:59.009 [2024-11-17 23:07:55.525887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1
00:07:59.009 [2024-11-17 23:07:55.525939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536
00:07:59.009 [2024-11-17 23:07:55.525954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1
00:07:59.009 #7 NEW cov: 11719 ft: 12377 corp: 3/63b lim: 50 exec/s: 0 rss: 69Mb L: 33/33 MS: 1 InsertRepeatedBytes-
00:07:59.009 [2024-11-17 23:07:55.565793] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536
00:07:59.009 [2024-11-17 23:07:55.565823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1
00:07:59.009 [2024-11-17 23:07:55.565861] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536
00:07:59.009 [2024-11-17 23:07:55.565877]
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.009 [2024-11-17 23:07:55.565930] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:59.009 [2024-11-17 23:07:55.565945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.009 #8 NEW cov: 11725 ft: 12609 corp: 4/96b lim: 50 exec/s: 0 rss: 69Mb L: 33/33 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 00:07:59.009 [2024-11-17 23:07:55.605792] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:07:59.009 [2024-11-17 23:07:55.605824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.009 [2024-11-17 23:07:55.605879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:59.009 [2024-11-17 23:07:55.605895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.270 #9 NEW cov: 11810 ft: 12944 corp: 5/125b lim: 50 exec/s: 0 rss: 69Mb L: 29/33 MS: 1 CopyPart- 00:07:59.270 [2024-11-17 23:07:55.645941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204128346984997088 len:57569 00:07:59.270 [2024-11-17 23:07:55.645969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.270 [2024-11-17 23:07:55.646021] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:59.270 [2024-11-17 23:07:55.646039] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.270 #10 NEW cov: 11810 ft: 13052 corp: 6/154b lim: 50 exec/s: 0 rss: 69Mb L: 29/33 MS: 1 ChangeBit- 00:07:59.270 [2024-11-17 23:07:55.686152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18380597454182547455 len:65536 00:07:59.270 [2024-11-17 23:07:55.686181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.270 [2024-11-17 23:07:55.686231] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:59.270 [2024-11-17 23:07:55.686247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.270 [2024-11-17 23:07:55.686302] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:59.270 [2024-11-17 23:07:55.686318] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.270 #11 NEW cov: 11810 ft: 13190 corp: 7/187b lim: 50 exec/s: 0 rss: 69Mb L: 33/33 MS: 1 ChangeByte- 00:07:59.270 [2024-11-17 23:07:55.726304] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 
lba:10655763974808723680 len:57569 00:07:59.270 [2024-11-17 23:07:55.726332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.270 [2024-11-17 23:07:55.726368] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:59.270 [2024-11-17 23:07:55.726383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.270 [2024-11-17 23:07:55.726435] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:2571 00:07:59.270 [2024-11-17 23:07:55.726452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.270 #12 NEW cov: 11810 ft: 13254 corp: 8/217b lim: 50 exec/s: 0 rss: 69Mb L: 30/33 MS: 1 InsertByte- 00:07:59.270 [2024-11-17 23:07:55.766369] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10655763974808723680 len:57569 00:07:59.270 [2024-11-17 23:07:55.766398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.270 [2024-11-17 23:07:55.766434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:59.270 [2024-11-17 23:07:55.766453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.270 [2024-11-17 23:07:55.766506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:2571 00:07:59.270 [2024-11-17 23:07:55.766523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.270 #13 NEW cov: 11810 ft: 13292 corp: 9/247b lim: 50 exec/s: 0 rss: 69Mb L: 30/33 MS: 1 ShuffleBytes- 00:07:59.270 [2024-11-17 23:07:55.806675] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:07:59.270 [2024-11-17 23:07:55.806704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.270 [2024-11-17 23:07:55.806739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073187426303 len:65536 00:07:59.270 [2024-11-17 23:07:55.806754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.270 [2024-11-17 23:07:55.806806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:59.270 [2024-11-17 23:07:55.806820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.270 [2024-11-17 23:07:55.806875] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204198716249268448 len:57569 00:07:59.270 [2024-11-17 23:07:55.806892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.270 #14 NEW cov: 11810 ft: 13567 corp: 10/295b lim: 50 exec/s: 0 rss: 69Mb L: 48/48 MS: 1 InsertRepeatedBytes- 00:07:59.270 [2024-11-17 23:07:55.846530] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:07:59.270 [2024-11-17 23:07:55.846562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.270 [2024-11-17 23:07:55.846615] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:59.270 [2024-11-17 23:07:55.846631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.270 #15 NEW cov: 11810 ft: 13589 corp: 11/324b lim: 50 exec/s: 0 rss: 69Mb L: 29/48 MS: 1 CopyPart- 00:07:59.530 [2024-11-17 23:07:55.886657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:07:59.530 [2024-11-17 23:07:55.886687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.530 [2024-11-17 23:07:55.886742] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:59.530 [2024-11-17 23:07:55.886758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.530 #16 NEW cov: 11810 ft: 13655 corp: 12/353b lim: 50 exec/s: 0 rss: 69Mb L: 29/48 MS: 1 ShuffleBytes- 00:07:59.530 [2024-11-17 23:07:55.927013] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10655763974808723680 len:57569 00:07:59.530 [2024-11-17 23:07:55.927042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.530 [2024-11-17 23:07:55.927079] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:59.530 [2024-11-17 23:07:55.927096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.530 [2024-11-17 23:07:55.927152] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:18762 00:07:59.530 [2024-11-17 23:07:55.927168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.530 [2024-11-17 23:07:55.927223] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:5280832617179597129 len:2571 00:07:59.530 [2024-11-17 23:07:55.927238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.530 #17 NEW cov: 11810 ft: 13720 corp: 13/393b lim: 50 exec/s: 0 rss: 69Mb L: 40/48 MS: 1 InsertRepeatedBytes- 00:07:59.530 [2024-11-17 23:07:55.966894] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:07:59.530 [2024-11-17 23:07:55.966923] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.530 [2024-11-17 23:07:55.966977] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:59.530 [2024-11-17 23:07:55.966993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.530 #20 NEW cov: 11810 ft: 13756 corp: 14/417b lim: 50 exec/s: 0 rss: 69Mb L: 24/48 MS: 3 InsertByte-ChangeBinInt-InsertRepeatedBytes- 00:07:59.530 [2024-11-17 23:07:56.007101] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:07:59.530 [2024-11-17 23:07:56.007129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.530 [2024-11-17 23:07:56.007166] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:07:59.530 [2024-11-17 23:07:56.007182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.530 [2024-11-17 23:07:56.007236] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:07:59.530 [2024-11-17 23:07:56.007251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.530 #21 NEW cov: 11810 ft: 13915 corp: 15/450b lim: 50 exec/s: 0 rss: 69Mb L: 33/48 MS: 1 ShuffleBytes- 00:07:59.530 [2024-11-17 23:07:56.047131] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:07:59.531 [2024-11-17 23:07:56.047159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.531 [2024-11-17 23:07:56.047203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073189457919 len:65505 00:07:59.531 [2024-11-17 23:07:56.047219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.531 #22 NEW cov: 11810 ft: 13933 corp: 16/479b lim: 50 exec/s: 0 rss: 69Mb L: 29/48 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:07:59.531 [2024-11-17 23:07:56.087504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204128346984997088 len:57569 00:07:59.531 [2024-11-17 23:07:56.087538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.531 [2024-11-17 23:07:56.087577] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204114053333836000 len:57569 00:07:59.531 [2024-11-17 23:07:56.087593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.531 [2024-11-17 23:07:56.087656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:57569 00:07:59.531 [2024-11-17 23:07:56.087672] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.531 [2024-11-17 23:07:56.087726] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204198715729174752 len:57569 00:07:59.531 [2024-11-17 23:07:56.087743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.531 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:07:59.531 #23 NEW cov: 11833 ft: 13996 corp: 17/522b lim: 50 exec/s: 0 rss: 70Mb L: 43/48 MS: 1 CrossOver- 00:07:59.531 [2024-11-17 23:07:56.127462] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174572 len:57569 00:07:59.531 [2024-11-17 23:07:56.127490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.531 [2024-11-17 23:07:56.127528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:59.531 [2024-11-17 23:07:56.127549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.531 [2024-11-17 23:07:56.127604] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:2571 00:07:59.531 [2024-11-17 23:07:56.127620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.791 #24 NEW cov: 11833 ft: 14112 corp: 18/552b lim: 50 exec/s: 0 rss: 70Mb L: 30/48 MS: 1 InsertByte- 00:07:59.791 [2024-11-17 23:07:56.167588] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198810218455264 len:65536 00:07:59.791 [2024-11-17 23:07:56.167615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.791 [2024-11-17 23:07:56.167657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198716249268448 len:57569 00:07:59.791 [2024-11-17 23:07:56.167673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.791 [2024-11-17 23:07:56.167727] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:57569 00:07:59.791 [2024-11-17 23:07:56.167743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.791 #25 NEW cov: 11833 ft: 14133 corp: 19/585b lim: 50 exec/s: 0 rss: 70Mb L: 33/48 MS: 1 CMP- DE: "\366\377\377\377"- 00:07:59.791 [2024-11-17 23:07:56.207874] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:07:59.791 [2024-11-17 23:07:56.207902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.791 [2024-11-17 23:07:56.207947] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 
nsid:0 lba:18446744073187426303 len:65536 00:07:59.791 [2024-11-17 23:07:56.207963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.791 [2024-11-17 23:07:56.208015] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446743940565565439 len:57569 00:07:59.791 [2024-11-17 23:07:56.208030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.791 [2024-11-17 23:07:56.208083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204198715729174752 len:57569 00:07:59.791 [2024-11-17 23:07:56.208102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.791 #26 NEW cov: 11833 ft: 14154 corp: 20/629b lim: 50 exec/s: 26 rss: 70Mb L: 44/48 MS: 1 EraseBytes- 00:07:59.791 [2024-11-17 23:07:56.247739] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:07:59.791 [2024-11-17 23:07:56.247767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.791 [2024-11-17 23:07:56.247821] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073189457919 len:65505 00:07:59.791 [2024-11-17 23:07:56.247836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.791 #27 NEW cov: 11833 ft: 14179 corp: 21/655b lim: 50 exec/s: 27 rss: 70Mb L: 26/48 MS: 1 EraseBytes- 00:07:59.791 [2024-11-17 23:07:56.288078] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:07:59.791 [2024-11-17 23:07:56.288106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.791 [2024-11-17 23:07:56.288145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073187426303 len:65536 00:07:59.791 [2024-11-17 23:07:56.288160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.791 [2024-11-17 23:07:56.288212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446743940565511168 len:57569 00:07:59.791 [2024-11-17 23:07:56.288228] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.791 [2024-11-17 23:07:56.288282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204198715729174752 len:57569 00:07:59.791 [2024-11-17 23:07:56.288297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.791 #28 NEW cov: 11833 ft: 14280 corp: 22/699b lim: 50 exec/s: 28 rss: 70Mb L: 44/48 MS: 1 ChangeBinInt- 00:07:59.791 [2024-11-17 23:07:56.328327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:07:59.791 
[2024-11-17 23:07:56.328355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.791 [2024-11-17 23:07:56.328403] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073187426303 len:65536 00:07:59.791 [2024-11-17 23:07:56.328418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.791 [2024-11-17 23:07:56.328473] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446709855705104383 len:65536 00:07:59.791 [2024-11-17 23:07:56.328489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.791 [2024-11-17 23:07:56.328541] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204198716251308000 len:57569 00:07:59.791 [2024-11-17 23:07:56.328557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.791 [2024-11-17 23:07:56.328609] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:16204198715729174752 len:2571 00:07:59.791 [2024-11-17 23:07:56.328624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:07:59.791 #29 NEW cov: 11833 ft: 14320 corp: 23/749b lim: 50 exec/s: 29 rss: 70Mb L: 50/50 MS: 1 CopyPart- 00:07:59.791 [2024-11-17 23:07:56.368326] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:07:59.791 [2024-11-17 23:07:56.368353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:07:59.791 [2024-11-17 23:07:56.368396] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:07:59.791 [2024-11-17 23:07:56.368412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:07:59.791 [2024-11-17 23:07:56.368464] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:57569 00:07:59.791 [2024-11-17 23:07:56.368481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:07:59.791 [2024-11-17 23:07:56.368539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:783873591612596234 len:57569 00:07:59.791 [2024-11-17 23:07:56.368554] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:07:59.792 #30 NEW cov: 11833 ft: 14349 corp: 24/798b lim: 50 exec/s: 30 rss: 70Mb L: 49/50 MS: 1 CopyPart- 00:08:00.051 [2024-11-17 23:07:56.408350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16151631240898863328 len:1 00:08:00.051 [2024-11-17 23:07:56.408380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.052 
[2024-11-17 23:07:56.408432] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198711958380512 len:57569 00:08:00.052 [2024-11-17 23:07:56.408448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.052 [2024-11-17 23:07:56.408501] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:57569 00:08:00.052 [2024-11-17 23:07:56.408517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.052 #31 NEW cov: 11833 ft: 14378 corp: 25/831b lim: 50 exec/s: 31 rss: 70Mb L: 33/50 MS: 1 ChangeBinInt- 00:08:00.052 [2024-11-17 23:07:56.448565] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:08:00.052 [2024-11-17 23:07:56.448594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.052 [2024-11-17 23:07:56.448629] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073187426303 len:65536 00:08:00.052 [2024-11-17 23:07:56.448645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.052 [2024-11-17 23:07:56.448698] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446743940565565439 len:57569 00:08:00.052 [2024-11-17 23:07:56.448714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.052 [2024-11-17 23:07:56.448767] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204198715729174752 len:57569 00:08:00.052 [2024-11-17 23:07:56.448782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.052 #32 NEW cov: 11833 ft: 14387 corp: 26/875b lim: 50 exec/s: 32 rss: 70Mb L: 44/50 MS: 1 ShuffleBytes- 00:08:00.052 [2024-11-17 23:07:56.488460] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:08:00.052 [2024-11-17 23:07:56.488492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.052 [2024-11-17 23:07:56.488549] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:63297651218645504 len:57569 00:08:00.052 [2024-11-17 23:07:56.488566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.052 #33 NEW cov: 11833 ft: 14399 corp: 27/904b lim: 50 exec/s: 33 rss: 70Mb L: 29/50 MS: 1 CMP- DE: "\001\002\000\000"- 00:08:00.052 [2024-11-17 23:07:56.528817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:00.052 [2024-11-17 23:07:56.528845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.052 [2024-11-17 23:07:56.528886] 
nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:08:00.052 [2024-11-17 23:07:56.528903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.052 [2024-11-17 23:07:56.528956] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:00.052 [2024-11-17 23:07:56.528971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.052 [2024-11-17 23:07:56.529025] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 00:08:00.052 [2024-11-17 23:07:56.529041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.052 #34 NEW cov: 11833 ft: 14412 corp: 28/950b lim: 50 exec/s: 34 rss: 70Mb L: 46/50 MS: 1 CopyPart- 00:08:00.052 [2024-11-17 23:07:56.568840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:00.052 [2024-11-17 23:07:56.568868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.052 [2024-11-17 23:07:56.568909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:4278190080 len:1280 00:08:00.052 [2024-11-17 23:07:56.568924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.052 [2024-11-17 23:07:56.568979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:00.052 [2024-11-17 23:07:56.568993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.052 #35 NEW cov: 11833 ft: 14423 corp: 29/983b lim: 50 exec/s: 35 rss: 70Mb L: 33/50 MS: 1 ChangeBinInt- 00:08:00.052 [2024-11-17 23:07:56.608806] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16186184317219692768 len:57569 00:08:00.052 [2024-11-17 23:07:56.608834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.052 [2024-11-17 23:07:56.608890] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:08:00.052 [2024-11-17 23:07:56.608906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.052 #36 NEW cov: 11833 ft: 14445 corp: 30/1011b lim: 50 exec/s: 36 rss: 70Mb L: 28/50 MS: 1 EraseBytes- 00:08:00.052 [2024-11-17 23:07:56.649030] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16151631240898863328 len:92 00:08:00.052 [2024-11-17 23:07:56.649062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.052 [2024-11-17 23:07:56.649104] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198711958380512 len:57569 00:08:00.052 [2024-11-17 23:07:56.649120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.052 [2024-11-17 23:07:56.649174] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:57569 00:08:00.052 [2024-11-17 23:07:56.649189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.312 #37 NEW cov: 11833 ft: 14448 corp: 31/1044b lim: 50 exec/s: 37 rss: 70Mb L: 33/50 MS: 1 ChangeByte- 00:08:00.312 [2024-11-17 23:07:56.689175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:08:00.312 [2024-11-17 23:07:56.689203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.312 [2024-11-17 23:07:56.689240] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174564 len:57569 00:08:00.312 [2024-11-17 23:07:56.689256] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.312 [2024-11-17 23:07:56.689310] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:2571 00:08:00.312 [2024-11-17 23:07:56.689327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.312 #38 NEW cov: 11833 ft: 14456 corp: 32/1074b lim: 50 exec/s: 38 rss: 70Mb L: 30/50 MS: 1 InsertByte- 00:08:00.312 [2024-11-17 23:07:56.729260] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:08:00.312 [2024-11-17 23:07:56.729288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.312 [2024-11-17 23:07:56.729327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204197757936853216 len:513 00:08:00.312 [2024-11-17 23:07:56.729343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.312 [2024-11-17 23:07:56.729397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198711971078368 len:57569 00:08:00.312 [2024-11-17 23:07:56.729412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.312 #39 NEW cov: 11833 ft: 14475 corp: 33/1109b lim: 50 exec/s: 39 rss: 70Mb L: 35/50 MS: 1 CopyPart- 00:08:00.312 [2024-11-17 23:07:56.769386] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 00:08:00.312 [2024-11-17 23:07:56.769414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.312 [2024-11-17 23:07:56.769450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:1095216660480 len:1280 
00:08:00.312 [2024-11-17 23:07:56.769466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.312 [2024-11-17 23:07:56.769523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 00:08:00.312 [2024-11-17 23:07:56.769545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.312 #40 NEW cov: 11833 ft: 14479 corp: 34/1142b lim: 50 exec/s: 40 rss: 70Mb L: 33/50 MS: 1 ShuffleBytes- 00:08:00.312 [2024-11-17 23:07:56.809515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:08:00.312 [2024-11-17 23:07:56.809547] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.312 [2024-11-17 23:07:56.809585] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:08:00.312 [2024-11-17 23:07:56.809601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.312 [2024-11-17 23:07:56.809664] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204197796606173408 len:65536 00:08:00.312 [2024-11-17 23:07:56.809679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.312 #41 NEW cov: 11833 ft: 14487 corp: 35/1180b lim: 50 exec/s: 41 rss: 70Mb L: 38/50 MS: 1 InsertRepeatedBytes- 00:08:00.312 [2024-11-17 23:07:56.849544] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:08:00.312 [2024-11-17 23:07:56.849571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.312 [2024-11-17 23:07:56.849624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729182688 len:57569 00:08:00.312 [2024-11-17 23:07:56.849640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.312 #42 NEW cov: 11833 ft: 14527 corp: 36/1204b lim: 50 exec/s: 42 rss: 70Mb L: 24/50 MS: 1 EraseBytes- 00:08:00.312 [2024-11-17 23:07:56.889881] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:10655763974808723680 len:57569 00:08:00.312 [2024-11-17 23:07:56.889909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.313 [2024-11-17 23:07:56.889955] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:57569 00:08:00.313 [2024-11-17 23:07:56.889970] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.313 [2024-11-17 23:07:56.890026] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204197946930028768 len:11566 00:08:00.313 [2024-11-17 23:07:56.890042] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.313 [2024-11-17 23:07:56.890095] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:3255307777713450285 len:11566 00:08:00.313 [2024-11-17 23:07:56.890110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.313 #43 NEW cov: 11833 ft: 14541 corp: 37/1250b lim: 50 exec/s: 43 rss: 70Mb L: 46/50 MS: 1 InsertRepeatedBytes- 00:08:00.573 [2024-11-17 23:07:56.930029] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:08:00.573 [2024-11-17 23:07:56.930058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.573 [2024-11-17 23:07:56.930094] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:9223372036332650495 len:65536 00:08:00.573 [2024-11-17 23:07:56.930109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.573 [2024-11-17 23:07:56.930162] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18446743940565565439 len:57569 00:08:00.573 [2024-11-17 23:07:56.930181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.573 [2024-11-17 23:07:56.930237] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204198715729174752 len:57569 00:08:00.573 [2024-11-17 23:07:56.930253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.573 #44 NEW cov: 11833 ft: 14568 corp: 38/1294b lim: 50 exec/s: 44 rss: 70Mb L: 44/50 MS: 1 ChangeBit- 00:08:00.573 [2024-11-17 23:07:56.969786] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:8825501084366305914 len:31355 00:08:00.573 [2024-11-17 23:07:56.969815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.573 #45 NEW cov: 11833 ft: 14868 corp: 39/1311b lim: 50 exec/s: 45 rss: 70Mb L: 17/50 MS: 1 InsertRepeatedBytes- 00:08:00.573 [2024-11-17 23:07:57.010208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:08:00.573 [2024-11-17 23:07:57.010236] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.573 [2024-11-17 23:07:57.010279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729182688 len:57569 00:08:00.573 [2024-11-17 23:07:57.010295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.573 [2024-11-17 23:07:57.010349] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:57600 00:08:00.573 [2024-11-17 23:07:57.010365] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.573 [2024-11-17 23:07:57.010420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:16204198715729174752 len:57355 00:08:00.573 [2024-11-17 23:07:57.010435] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:00.573 #46 NEW cov: 11833 ft: 14872 corp: 40/1359b lim: 50 exec/s: 46 rss: 70Mb L: 48/50 MS: 1 CopyPart- 00:08:00.573 [2024-11-17 23:07:57.050088] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174572 len:57569 00:08:00.573 [2024-11-17 23:07:57.050115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.573 [2024-11-17 23:07:57.050170] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174752 len:2571 00:08:00.573 [2024-11-17 23:07:57.050187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.573 #47 NEW cov: 11833 ft: 14885 corp: 41/1379b lim: 50 exec/s: 47 rss: 70Mb L: 20/50 MS: 1 EraseBytes- 00:08:00.573 [2024-11-17 23:07:57.090247] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:08:00.573 [2024-11-17 23:07:57.090274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.573 [2024-11-17 23:07:57.090329] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174572 len:57569 00:08:00.573 [2024-11-17 23:07:57.090345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.573 #48 NEW cov: 11833 ft: 14889 corp: 42/1408b lim: 50 exec/s: 48 rss: 70Mb L: 29/50 MS: 1 ChangeByte- 00:08:00.573 [2024-11-17 23:07:57.120431] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174752 len:57569 00:08:00.573 [2024-11-17 23:07:57.120463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.573 [2024-11-17 23:07:57.120504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16147340770433622052 len:57569 00:08:00.573 [2024-11-17 23:07:57.120520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.573 [2024-11-17 23:07:57.120579] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:16204198715729174752 len:2571 00:08:00.573 [2024-11-17 23:07:57.120595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:00.573 #49 NEW cov: 11833 ft: 14895 corp: 43/1438b lim: 50 exec/s: 49 rss: 70Mb L: 30/50 MS: 1 ChangeByte- 00:08:00.573 [2024-11-17 23:07:57.160485] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:16204198715729174572 len:57569 
00:08:00.573 [2024-11-17 23:07:57.160513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.573 [2024-11-17 23:07:57.160575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:16204198715729174624 len:2571 00:08:00.573 [2024-11-17 23:07:57.160592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.833 #50 NEW cov: 11833 ft: 14901 corp: 44/1458b lim: 50 exec/s: 50 rss: 70Mb L: 20/50 MS: 1 ChangeBit- 00:08:00.833 [2024-11-17 23:07:57.200581] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:3233831788902089184 len:57569 00:08:00.833 [2024-11-17 23:07:57.200609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:00.833 [2024-11-17 23:07:57.200661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:6980826678874398944 len:57355 00:08:00.833 [2024-11-17 23:07:57.200678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:00.833 #51 NEW cov: 11833 ft: 14917 corp: 45/1479b lim: 50 exec/s: 25 rss: 70Mb L: 21/50 MS: 1 InsertByte- 00:08:00.833 #51 DONE cov: 11833 ft: 14917 corp: 45/1479b lim: 50 exec/s: 25 rss: 70Mb 00:08:00.833 ###### Recommended dictionary. ###### 00:08:00.833 "\377\377\377\377\377\377\377\377" # Uses: 1 00:08:00.833 "\366\377\377\377" # Uses: 0 00:08:00.833 "\001\002\000\000" # Uses: 0 00:08:00.833 ###### End of recommended dictionary. 
###### 00:08:00.833 Done 51 runs in 2 second(s) 00:08:00.833 23:07:57 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_19.conf 00:08:00.833 23:07:57 -- ../common.sh@72 -- # (( i++ )) 00:08:00.833 23:07:57 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:00.833 23:07:57 -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:08:00.833 23:07:57 -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:08:00.833 23:07:57 -- nvmf/run.sh@24 -- # local timen=1 00:08:00.833 23:07:57 -- nvmf/run.sh@25 -- # local core=0x1 00:08:00.833 23:07:57 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:00.833 23:07:57 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:08:00.833 23:07:57 -- nvmf/run.sh@29 -- # printf %02d 20 00:08:00.833 23:07:57 -- nvmf/run.sh@29 -- # port=4420 00:08:00.833 23:07:57 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:00.833 23:07:57 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:08:00.833 23:07:57 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:00.833 23:07:57 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 -r /var/tmp/spdk20.sock 00:08:00.833 [2024-11-17 23:07:57.386837] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:00.834 [2024-11-17 23:07:57.386924] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1305722 ] 00:08:00.834 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.092 [2024-11-17 23:07:57.571126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.092 [2024-11-17 23:07:57.634059] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:01.092 [2024-11-17 23:07:57.634185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.092 [2024-11-17 23:07:57.692085] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.351 [2024-11-17 23:07:57.708414] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:01.351 INFO: Running with entropic power schedule (0xFF, 100). 00:08:01.351 INFO: Seed: 2683555378 00:08:01.351 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:01.351 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:01.351 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:08:01.351 INFO: A corpus is not provided, starting from an empty corpus 00:08:01.351 #2 INITED exec/s: 0 rss: 60Mb 00:08:01.351 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:01.351 This may also happen if the target rejected all inputs we tried so far 00:08:01.351 [2024-11-17 23:07:57.773543] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:01.351 [2024-11-17 23:07:57.773575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.610 NEW_FUNC[1/671]: 0x45c9d8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:08:01.610 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:01.610 #16 NEW cov: 11657 ft: 11660 corp: 2/35b lim: 90 exec/s: 0 rss: 69Mb L: 34/34 MS: 4 ChangeBit-CrossOver-InsertByte-InsertRepeatedBytes- 00:08:01.610 [2024-11-17 23:07:58.094962] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:01.610 [2024-11-17 23:07:58.095022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.610 [2024-11-17 23:07:58.095106] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:01.610 [2024-11-17 23:07:58.095137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.610 [2024-11-17 23:07:58.095217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:01.611 [2024-11-17 23:07:58.095246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.611 [2024-11-17 23:07:58.095326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:01.611 [2024-11-17 23:07:58.095353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.611 NEW_FUNC[1/1]: 0x16a9038 in _nvme_qpair_complete_abort_queued_reqs /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:593 00:08:01.611 #19 NEW cov: 11777 ft: 13234 corp: 3/114b lim: 90 exec/s: 0 rss: 69Mb L: 79/79 MS: 3 CopyPart-ChangeBit-InsertRepeatedBytes- 00:08:01.611 [2024-11-17 23:07:58.144527] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:01.611 [2024-11-17 23:07:58.144563] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.611 [2024-11-17 23:07:58.144621] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:01.611 [2024-11-17 23:07:58.144636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.611 #20 NEW cov: 11783 ft: 13744 corp: 4/153b lim: 90 exec/s: 0 rss: 69Mb L: 39/79 MS: 1 InsertRepeatedBytes- 00:08:01.611 [2024-11-17 23:07:58.184651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:01.611 [2024-11-17 23:07:58.184679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 
00:08:01.611 [2024-11-17 23:07:58.184733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:01.611 [2024-11-17 23:07:58.184748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.611 #26 NEW cov: 11868 ft: 14047 corp: 5/193b lim: 90 exec/s: 0 rss: 69Mb L: 40/79 MS: 1 InsertByte- 00:08:01.870 [2024-11-17 23:07:58.234652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:01.870 [2024-11-17 23:07:58.234681] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.870 #27 NEW cov: 11868 ft: 14094 corp: 6/214b lim: 90 exec/s: 0 rss: 69Mb L: 21/79 MS: 1 EraseBytes- 00:08:01.870 [2024-11-17 23:07:58.274887] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:01.870 [2024-11-17 23:07:58.274915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.871 [2024-11-17 23:07:58.274968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:01.871 [2024-11-17 23:07:58.274985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.871 #28 NEW cov: 11868 ft: 14155 corp: 7/261b lim: 90 exec/s: 0 rss: 69Mb L: 47/79 MS: 1 CopyPart- 00:08:01.871 [2024-11-17 23:07:58.314980] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:01.871 [2024-11-17 23:07:58.315007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.871 [2024-11-17 23:07:58.315061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:01.871 [2024-11-17 23:07:58.315078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.871 #29 NEW cov: 11868 ft: 14244 corp: 8/301b lim: 90 exec/s: 0 rss: 69Mb L: 40/79 MS: 1 InsertByte- 00:08:01.871 [2024-11-17 23:07:58.355090] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:01.871 [2024-11-17 23:07:58.355118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.871 [2024-11-17 23:07:58.355174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:01.871 [2024-11-17 23:07:58.355189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.871 #30 NEW cov: 11868 ft: 14315 corp: 9/353b lim: 90 exec/s: 0 rss: 69Mb L: 52/79 MS: 1 CrossOver- 00:08:01.871 [2024-11-17 23:07:58.395518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:01.871 [2024-11-17 23:07:58.395551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.871 [2024-11-17 23:07:58.395595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE 
(11) sqid:1 cid:1 nsid:0 00:08:01.871 [2024-11-17 23:07:58.395614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.871 [2024-11-17 23:07:58.395666] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:01.871 [2024-11-17 23:07:58.395682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.871 [2024-11-17 23:07:58.395734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:01.871 [2024-11-17 23:07:58.395748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.871 #31 NEW cov: 11868 ft: 14428 corp: 10/439b lim: 90 exec/s: 0 rss: 69Mb L: 86/86 MS: 1 InsertRepeatedBytes- 00:08:01.871 [2024-11-17 23:07:58.435663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:01.871 [2024-11-17 23:07:58.435691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.871 [2024-11-17 23:07:58.435733] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:01.871 [2024-11-17 23:07:58.435748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:01.871 [2024-11-17 23:07:58.435801] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:01.871 [2024-11-17 23:07:58.435817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:01.871 [2024-11-17 23:07:58.435870] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:01.871 [2024-11-17 23:07:58.435886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:01.871 #32 NEW cov: 11868 ft: 14492 corp: 11/518b lim: 90 exec/s: 0 rss: 69Mb L: 79/86 MS: 1 ShuffleBytes- 00:08:01.871 [2024-11-17 23:07:58.475453] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:01.871 [2024-11-17 23:07:58.475482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:01.871 [2024-11-17 23:07:58.475542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:01.871 [2024-11-17 23:07:58.475558] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.131 #33 NEW cov: 11868 ft: 14579 corp: 12/570b lim: 90 exec/s: 0 rss: 69Mb L: 52/86 MS: 1 InsertRepeatedBytes- 00:08:02.131 [2024-11-17 23:07:58.515599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.131 [2024-11-17 23:07:58.515627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.131 [2024-11-17 23:07:58.515678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.131 [2024-11-17 23:07:58.515694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.131 #34 NEW cov: 11868 ft: 14590 corp: 13/622b lim: 90 exec/s: 0 rss: 69Mb L: 52/86 MS: 1 CrossOver- 00:08:02.131 [2024-11-17 23:07:58.555998] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.131 [2024-11-17 23:07:58.556026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.131 [2024-11-17 23:07:58.556065] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.131 [2024-11-17 23:07:58.556081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.131 [2024-11-17 23:07:58.556135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:02.131 [2024-11-17 23:07:58.556150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.131 [2024-11-17 23:07:58.556200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:02.131 [2024-11-17 23:07:58.556215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.131 #35 NEW cov: 11868 ft: 14638 corp: 14/711b lim: 90 exec/s: 0 rss: 69Mb L: 89/89 MS: 1 InsertRepeatedBytes- 00:08:02.131 [2024-11-17 23:07:58.606107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.131 [2024-11-17 23:07:58.606135] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.131 [2024-11-17 23:07:58.606177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.131 [2024-11-17 23:07:58.606192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.131 [2024-11-17 23:07:58.606245] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:02.131 [2024-11-17 23:07:58.606260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.131 [2024-11-17 23:07:58.606313] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:02.131 [2024-11-17 23:07:58.606328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.131 #36 NEW cov: 11868 ft: 14650 corp: 15/794b lim: 90 exec/s: 0 rss: 70Mb L: 83/89 MS: 1 InsertRepeatedBytes- 00:08:02.131 [2024-11-17 23:07:58.646215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.131 [2024-11-17 23:07:58.646243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.131 [2024-11-17 23:07:58.646289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.131 [2024-11-17 23:07:58.646304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.131 [2024-11-17 23:07:58.646357] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:02.131 [2024-11-17 23:07:58.646373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.131 [2024-11-17 23:07:58.646426] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:02.131 [2024-11-17 23:07:58.646441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.131 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:02.131 #37 NEW cov: 11891 ft: 14698 corp: 16/883b lim: 90 exec/s: 0 rss: 70Mb L: 89/89 MS: 1 ChangeBit- 00:08:02.131 [2024-11-17 23:07:58.696091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.131 [2024-11-17 23:07:58.696120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.131 [2024-11-17 23:07:58.696173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.131 [2024-11-17 23:07:58.696189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.131 #38 NEW cov: 11891 ft: 14753 corp: 17/935b lim: 90 exec/s: 0 rss: 70Mb L: 52/89 MS: 1 ChangeBit- 00:08:02.131 [2024-11-17 23:07:58.736173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.131 [2024-11-17 23:07:58.736202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.131 [2024-11-17 23:07:58.736259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.131 [2024-11-17 23:07:58.736276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.391 #39 NEW cov: 11891 ft: 14838 corp: 18/982b lim: 90 exec/s: 39 rss: 70Mb L: 47/89 MS: 1 ChangeBinInt- 00:08:02.391 [2024-11-17 23:07:58.776335] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.391 [2024-11-17 23:07:58.776364] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.391 [2024-11-17 23:07:58.776423] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.391 [2024-11-17 23:07:58.776439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.391 #40 NEW cov: 11891 ft: 14884 corp: 19/1022b lim: 90 exec/s: 40 rss: 70Mb L: 40/89 MS: 1 ChangeByte- 00:08:02.391 [2024-11-17 23:07:58.816712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.391 [2024-11-17 23:07:58.816740] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.391 [2024-11-17 23:07:58.816777] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.391 [2024-11-17 23:07:58.816793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.391 [2024-11-17 23:07:58.816846] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:02.391 [2024-11-17 23:07:58.816863] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.391 [2024-11-17 23:07:58.816918] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:02.391 [2024-11-17 23:07:58.816934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.391 #41 NEW cov: 11891 ft: 14930 corp: 20/1101b lim: 90 exec/s: 41 rss: 70Mb L: 79/89 MS: 1 CrossOver- 00:08:02.391 [2024-11-17 23:07:58.856589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.391 [2024-11-17 23:07:58.856617] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.391 [2024-11-17 23:07:58.856664] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.391 [2024-11-17 23:07:58.856679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.391 #42 NEW cov: 11891 ft: 14949 corp: 21/1148b lim: 90 exec/s: 42 rss: 70Mb L: 47/89 MS: 1 ChangeBit- 00:08:02.391 [2024-11-17 23:07:58.896547] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.391 [2024-11-17 23:07:58.896575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.391 #43 NEW cov: 11891 ft: 14998 corp: 22/1180b lim: 90 exec/s: 43 rss: 70Mb L: 32/89 MS: 1 EraseBytes- 00:08:02.391 [2024-11-17 23:07:58.936762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.391 [2024-11-17 23:07:58.936791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.391 [2024-11-17 23:07:58.936851] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.391 [2024-11-17 23:07:58.936867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.391 #44 NEW cov: 11891 ft: 15009 corp: 23/1220b lim: 90 exec/s: 44 rss: 70Mb L: 40/89 MS: 1 ChangeBinInt- 00:08:02.391 [2024-11-17 23:07:58.977181] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.392 [2024-11-17 23:07:58.977209] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.392 [2024-11-17 23:07:58.977246] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.392 [2024-11-17 23:07:58.977261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.392 [2024-11-17 23:07:58.977314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:02.392 [2024-11-17 23:07:58.977330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.392 [2024-11-17 23:07:58.977381] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:02.392 [2024-11-17 23:07:58.977396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.392 #45 NEW cov: 11891 ft: 15011 corp: 24/1299b lim: 90 exec/s: 45 rss: 70Mb L: 79/89 MS: 1 ChangeBinInt- 00:08:02.719 [2024-11-17 23:07:59.017109] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.719 [2024-11-17 23:07:59.017137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.719 [2024-11-17 23:07:59.017174] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.719 [2024-11-17 23:07:59.017190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.719 [2024-11-17 23:07:59.017224] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:02.719 [2024-11-17 23:07:59.017239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.719 #46 NEW cov: 11891 ft: 15301 corp: 25/1356b lim: 90 exec/s: 46 rss: 70Mb L: 57/89 MS: 1 CrossOver- 00:08:02.719 [2024-11-17 23:07:59.057115] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.719 [2024-11-17 23:07:59.057143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.719 [2024-11-17 23:07:59.057198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.719 [2024-11-17 23:07:59.057212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.719 #47 NEW cov: 11891 ft: 15303 corp: 26/1403b lim: 90 exec/s: 47 rss: 70Mb L: 47/89 MS: 1 CrossOver- 00:08:02.719 [2024-11-17 23:07:59.097231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.719 [2024-11-17 23:07:59.097259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.720 [2024-11-17 23:07:59.097315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.720 [2024-11-17 23:07:59.097329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.720 #48 NEW cov: 11891 ft: 15310 corp: 27/1449b lim: 
90 exec/s: 48 rss: 70Mb L: 46/89 MS: 1 EraseBytes- 00:08:02.720 [2024-11-17 23:07:59.137359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.720 [2024-11-17 23:07:59.137390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.720 [2024-11-17 23:07:59.137444] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.720 [2024-11-17 23:07:59.137460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.720 #49 NEW cov: 11891 ft: 15318 corp: 28/1501b lim: 90 exec/s: 49 rss: 70Mb L: 52/89 MS: 1 ShuffleBytes- 00:08:02.720 [2024-11-17 23:07:59.177427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.720 [2024-11-17 23:07:59.177454] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.720 [2024-11-17 23:07:59.177500] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.720 [2024-11-17 23:07:59.177517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.720 #50 NEW cov: 11891 ft: 15319 corp: 29/1547b lim: 90 exec/s: 50 rss: 70Mb L: 46/89 MS: 1 EraseBytes- 00:08:02.720 [2024-11-17 23:07:59.217871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.720 [2024-11-17 23:07:59.217899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.720 [2024-11-17 23:07:59.217939] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.720 [2024-11-17 23:07:59.217954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.720 [2024-11-17 23:07:59.218007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:02.720 [2024-11-17 23:07:59.218022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.720 [2024-11-17 23:07:59.218076] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:02.720 [2024-11-17 23:07:59.218091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.720 #51 NEW cov: 11891 ft: 15342 corp: 30/1626b lim: 90 exec/s: 51 rss: 70Mb L: 79/89 MS: 1 ChangeBinInt- 00:08:02.720 [2024-11-17 23:07:59.257542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.720 [2024-11-17 23:07:59.257569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.720 #52 NEW cov: 11891 ft: 15365 corp: 31/1655b lim: 90 exec/s: 52 rss: 70Mb L: 29/89 MS: 1 EraseBytes- 00:08:02.996 [2024-11-17 23:07:59.298081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.996 
[2024-11-17 23:07:59.298111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.996 [2024-11-17 23:07:59.298150] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.996 [2024-11-17 23:07:59.298166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.996 [2024-11-17 23:07:59.298219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:02.996 [2024-11-17 23:07:59.298235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.996 [2024-11-17 23:07:59.298290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:02.996 [2024-11-17 23:07:59.298307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:02.996 #53 NEW cov: 11891 ft: 15389 corp: 32/1730b lim: 90 exec/s: 53 rss: 70Mb L: 75/89 MS: 1 EraseBytes- 00:08:02.996 [2024-11-17 23:07:59.348091] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.996 [2024-11-17 23:07:59.348119] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.996 [2024-11-17 23:07:59.348166] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.996 [2024-11-17 23:07:59.348182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.996 [2024-11-17 23:07:59.348236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:02.996 [2024-11-17 23:07:59.348252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:02.996 #54 NEW cov: 11891 ft: 15403 corp: 33/1791b lim: 90 exec/s: 54 rss: 70Mb L: 61/89 MS: 1 EraseBytes- 00:08:02.996 [2024-11-17 23:07:59.387860] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.996 [2024-11-17 23:07:59.387887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.996 #55 NEW cov: 11891 ft: 15416 corp: 34/1826b lim: 90 exec/s: 55 rss: 70Mb L: 35/89 MS: 1 CrossOver- 00:08:02.996 [2024-11-17 23:07:59.428158] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.996 [2024-11-17 23:07:59.428187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.996 [2024-11-17 23:07:59.428244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.996 [2024-11-17 23:07:59.428258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.996 #56 NEW cov: 11891 ft: 15500 corp: 35/1878b lim: 90 exec/s: 56 rss: 70Mb L: 52/89 MS: 1 ChangeASCIIInt- 00:08:02.996 [2024-11-17 23:07:59.468124] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.996 [2024-11-17 23:07:59.468153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.996 #57 NEW cov: 11891 ft: 15555 corp: 36/1910b lim: 90 exec/s: 57 rss: 70Mb L: 32/89 MS: 1 ChangeByte- 00:08:02.996 [2024-11-17 23:07:59.508395] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.996 [2024-11-17 23:07:59.508423] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.996 [2024-11-17 23:07:59.508475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.996 [2024-11-17 23:07:59.508490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.996 #58 NEW cov: 11891 ft: 15570 corp: 37/1947b lim: 90 exec/s: 58 rss: 70Mb L: 37/89 MS: 1 EraseBytes- 00:08:02.996 [2024-11-17 23:07:59.548507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.996 [2024-11-17 23:07:59.548538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:02.996 [2024-11-17 23:07:59.548580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:02.996 [2024-11-17 23:07:59.548596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:02.996 #59 NEW cov: 11891 ft: 15584 corp: 38/1999b lim: 90 exec/s: 59 rss: 70Mb L: 52/89 MS: 1 ChangeASCIIInt- 00:08:02.996 [2024-11-17 23:07:59.588540] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:02.996 [2024-11-17 23:07:59.588567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.256 #60 NEW cov: 11891 ft: 15598 corp: 39/2034b lim: 90 exec/s: 60 rss: 70Mb L: 35/89 MS: 1 ChangeByte- 00:08:03.256 [2024-11-17 23:07:59.629053] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.256 [2024-11-17 23:07:59.629080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.256 [2024-11-17 23:07:59.629127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:03.256 [2024-11-17 23:07:59.629143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.256 [2024-11-17 23:07:59.629195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:03.256 [2024-11-17 23:07:59.629211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.256 [2024-11-17 23:07:59.629256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:03.256 [2024-11-17 23:07:59.629271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.256 #61 NEW cov: 11891 ft: 15602 corp: 40/2116b lim: 90 exec/s: 61 rss: 70Mb L: 82/89 MS: 1 CopyPart- 00:08:03.256 [2024-11-17 23:07:59.669146] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.256 [2024-11-17 23:07:59.669174] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.256 [2024-11-17 23:07:59.669212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:03.256 [2024-11-17 23:07:59.669226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.256 [2024-11-17 23:07:59.669277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:03.256 [2024-11-17 23:07:59.669293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.256 [2024-11-17 23:07:59.669343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:03.256 [2024-11-17 23:07:59.669358] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.256 #62 NEW cov: 11891 ft: 15718 corp: 41/2202b lim: 90 exec/s: 62 rss: 70Mb L: 86/89 MS: 1 CopyPart- 00:08:03.256 [2024-11-17 23:07:59.719132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.256 [2024-11-17 23:07:59.719159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.256 [2024-11-17 23:07:59.719196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:03.256 [2024-11-17 23:07:59.719211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.256 [2024-11-17 23:07:59.719264] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:03.256 [2024-11-17 23:07:59.719280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.256 #63 NEW cov: 11891 ft: 15734 corp: 42/2259b lim: 90 exec/s: 63 rss: 70Mb L: 57/89 MS: 1 CopyPart- 00:08:03.256 [2024-11-17 23:07:59.759404] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:08:03.256 [2024-11-17 23:07:59.759436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.256 [2024-11-17 23:07:59.759473] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:08:03.256 [2024-11-17 23:07:59.759488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.256 [2024-11-17 23:07:59.759544] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:08:03.256 [2024-11-17 23:07:59.759560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.256 [2024-11-17 23:07:59.759611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:08:03.256 [2024-11-17 23:07:59.759626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:03.256 #64 pulse cov: 11891 ft: 15749 corp: 42/2259b lim: 90 exec/s: 32 rss: 70Mb 00:08:03.256 #64 NEW cov: 11891 ft: 15749 corp: 43/2348b lim: 90 exec/s: 32 rss: 70Mb L: 89/89 MS: 1 ChangeByte- 00:08:03.256 #64 DONE cov: 11891 ft: 15749 corp: 43/2348b lim: 90 exec/s: 32 rss: 70Mb 00:08:03.256 Done 64 runs in 2 second(s) 00:08:03.515 23:07:59 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_20.conf 00:08:03.515 23:07:59 -- ../common.sh@72 -- # (( i++ )) 00:08:03.515 23:07:59 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:03.515 23:07:59 -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:08:03.515 23:07:59 -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:08:03.515 23:07:59 -- nvmf/run.sh@24 -- # local timen=1 00:08:03.515 23:07:59 -- nvmf/run.sh@25 -- # local core=0x1 00:08:03.515 23:07:59 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:03.515 23:07:59 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:08:03.515 23:07:59 -- nvmf/run.sh@29 -- # printf %02d 21 00:08:03.515 23:07:59 -- nvmf/run.sh@29 -- # port=4421 00:08:03.515 23:07:59 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:03.515 23:07:59 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:08:03.515 23:07:59 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:03.516 23:07:59 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 -r /var/tmp/spdk21.sock 00:08:03.516 [2024-11-17 23:07:59.934242] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:03.516 [2024-11-17 23:07:59.934327] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306163 ] 00:08:03.516 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.516 [2024-11-17 23:08:00.117461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.775 [2024-11-17 23:08:00.189190] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:03.775 [2024-11-17 23:08:00.189320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.775 [2024-11-17 23:08:00.247925] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.775 [2024-11-17 23:08:00.264241] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:08:03.775 INFO: Running with entropic power schedule (0xFF, 100). 
00:08:03.775 INFO: Seed: 942598045 00:08:03.775 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:03.775 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:03.775 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:08:03.775 INFO: A corpus is not provided, starting from an empty corpus 00:08:03.775 #2 INITED exec/s: 0 rss: 61Mb 00:08:03.775 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:03.775 This may also happen if the target rejected all inputs we tried so far 00:08:03.775 [2024-11-17 23:08:00.334560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:03.775 [2024-11-17 23:08:00.334599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:03.775 [2024-11-17 23:08:00.334716] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:03.775 [2024-11-17 23:08:00.334737] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:03.775 [2024-11-17 23:08:00.334848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:03.775 [2024-11-17 23:08:00.334869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:03.775 [2024-11-17 23:08:00.334987] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:03.775 [2024-11-17 23:08:00.335007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.035 NEW_FUNC[1/671]: 0x45fc08 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:08:04.035 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:04.035 #16 NEW cov: 11646 ft: 11646 corp: 2/47b lim: 50 exec/s: 0 rss: 68Mb L: 46/46 MS: 4 ChangeByte-ChangeBit-CrossOver-InsertRepeatedBytes- 00:08:04.295 [2024-11-17 23:08:00.675043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.295 [2024-11-17 23:08:00.675091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.295 [2024-11-17 23:08:00.675220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:04.295 [2024-11-17 23:08:00.675246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.295 [2024-11-17 23:08:00.675376] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:04.295 [2024-11-17 23:08:00.675404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.295 NEW_FUNC[1/1]: 0x170aae8 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1456 00:08:04.295 #17 NEW cov: 11761 ft: 12528 
corp: 3/85b lim: 50 exec/s: 0 rss: 68Mb L: 38/46 MS: 1 EraseBytes- 00:08:04.295 [2024-11-17 23:08:00.725080] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.295 [2024-11-17 23:08:00.725114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.295 [2024-11-17 23:08:00.725232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:04.295 [2024-11-17 23:08:00.725255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.295 [2024-11-17 23:08:00.725374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:04.295 [2024-11-17 23:08:00.725393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.295 #26 NEW cov: 11767 ft: 12857 corp: 4/115b lim: 50 exec/s: 0 rss: 68Mb L: 30/46 MS: 4 CopyPart-ShuffleBytes-ChangeBinInt-InsertRepeatedBytes- 00:08:04.295 [2024-11-17 23:08:00.765168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.295 [2024-11-17 23:08:00.765203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.295 [2024-11-17 23:08:00.765318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:04.295 [2024-11-17 23:08:00.765337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.295 [2024-11-17 23:08:00.765470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:04.295 [2024-11-17 23:08:00.765491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.295 #27 NEW cov: 11852 ft: 13064 corp: 5/146b lim: 50 exec/s: 0 rss: 68Mb L: 31/46 MS: 1 InsertByte- 00:08:04.295 [2024-11-17 23:08:00.815553] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.295 [2024-11-17 23:08:00.815582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.295 [2024-11-17 23:08:00.815683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:04.295 [2024-11-17 23:08:00.815707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.295 [2024-11-17 23:08:00.815828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:04.295 [2024-11-17 23:08:00.815849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.295 [2024-11-17 23:08:00.815968] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:04.295 [2024-11-17 23:08:00.815991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.295 
#28 NEW cov: 11852 ft: 13237 corp: 6/192b lim: 50 exec/s: 0 rss: 69Mb L: 46/46 MS: 1 ChangeByte- 00:08:04.295 [2024-11-17 23:08:00.855183] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.295 [2024-11-17 23:08:00.855213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.295 [2024-11-17 23:08:00.855346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:04.295 [2024-11-17 23:08:00.855369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.295 #29 NEW cov: 11852 ft: 13613 corp: 7/215b lim: 50 exec/s: 0 rss: 69Mb L: 23/46 MS: 1 EraseBytes- 00:08:04.295 [2024-11-17 23:08:00.895774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.295 [2024-11-17 23:08:00.895804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.295 [2024-11-17 23:08:00.895894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:04.295 [2024-11-17 23:08:00.895916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.295 [2024-11-17 23:08:00.896038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:04.295 [2024-11-17 23:08:00.896060] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.295 [2024-11-17 23:08:00.896184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:04.295 [2024-11-17 23:08:00.896205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.554 #30 NEW cov: 11852 ft: 13696 corp: 8/261b lim: 50 exec/s: 0 rss: 69Mb L: 46/46 MS: 1 ChangeByte- 00:08:04.554 [2024-11-17 23:08:00.935498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.554 [2024-11-17 23:08:00.935528] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.554 [2024-11-17 23:08:00.935665] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:04.554 [2024-11-17 23:08:00.935687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.554 #31 NEW cov: 11852 ft: 13752 corp: 9/281b lim: 50 exec/s: 0 rss: 69Mb L: 20/46 MS: 1 EraseBytes- 00:08:04.554 [2024-11-17 23:08:00.976119] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.554 [2024-11-17 23:08:00.976150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.554 [2024-11-17 23:08:00.976258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:04.554 [2024-11-17 23:08:00.976281] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.554 [2024-11-17 23:08:00.976406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:04.554 [2024-11-17 23:08:00.976430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.554 [2024-11-17 23:08:00.976558] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:04.554 [2024-11-17 23:08:00.976580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.554 #32 NEW cov: 11852 ft: 13785 corp: 10/323b lim: 50 exec/s: 0 rss: 69Mb L: 42/46 MS: 1 CMP- DE: "\017\000\000\000"- 00:08:04.554 [2024-11-17 23:08:01.026162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.554 [2024-11-17 23:08:01.026194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.554 [2024-11-17 23:08:01.026326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:04.554 [2024-11-17 23:08:01.026346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.554 [2024-11-17 23:08:01.026462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:04.554 [2024-11-17 23:08:01.026481] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.554 [2024-11-17 23:08:01.026604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:04.554 [2024-11-17 23:08:01.026627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.554 #33 NEW cov: 11852 ft: 13885 corp: 11/365b lim: 50 exec/s: 0 rss: 69Mb L: 42/46 MS: 1 ChangeBinInt- 00:08:04.554 [2024-11-17 23:08:01.075905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.554 [2024-11-17 23:08:01.075939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.554 [2024-11-17 23:08:01.076058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:04.554 [2024-11-17 23:08:01.076081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.554 [2024-11-17 23:08:01.076204] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:04.554 [2024-11-17 23:08:01.076230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.554 #34 NEW cov: 11852 ft: 13965 corp: 12/395b lim: 50 exec/s: 0 rss: 69Mb L: 30/46 MS: 1 ChangeBit- 00:08:04.554 [2024-11-17 23:08:01.116490] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.554 [2024-11-17 23:08:01.116520] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.554 [2024-11-17 23:08:01.116613] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:04.554 [2024-11-17 23:08:01.116634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.554 [2024-11-17 23:08:01.116753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:04.554 [2024-11-17 23:08:01.116773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.554 [2024-11-17 23:08:01.116892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:04.554 [2024-11-17 23:08:01.116914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.554 #35 NEW cov: 11852 ft: 14016 corp: 13/441b lim: 50 exec/s: 0 rss: 69Mb L: 46/46 MS: 1 ChangeBit- 00:08:04.554 [2024-11-17 23:08:01.156568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.554 [2024-11-17 23:08:01.156599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.554 [2024-11-17 23:08:01.156690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:04.554 [2024-11-17 23:08:01.156709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.554 [2024-11-17 23:08:01.156830] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:04.554 [2024-11-17 23:08:01.156849] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.554 [2024-11-17 23:08:01.156971] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:04.554 [2024-11-17 23:08:01.156993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.814 #36 NEW cov: 11852 ft: 14056 corp: 14/482b lim: 50 exec/s: 0 rss: 69Mb L: 41/46 MS: 1 InsertRepeatedBytes- 00:08:04.814 [2024-11-17 23:08:01.206503] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.814 [2024-11-17 23:08:01.206536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.814 [2024-11-17 23:08:01.206667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:04.814 [2024-11-17 23:08:01.206700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.814 [2024-11-17 23:08:01.206829] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:04.814 [2024-11-17 23:08:01.206859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 
dnr:1 00:08:04.814 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:04.814 #37 NEW cov: 11875 ft: 14099 corp: 15/512b lim: 50 exec/s: 0 rss: 69Mb L: 30/46 MS: 1 EraseBytes- 00:08:04.814 [2024-11-17 23:08:01.256402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.814 [2024-11-17 23:08:01.256433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.814 [2024-11-17 23:08:01.256554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:04.814 [2024-11-17 23:08:01.256576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.814 #38 NEW cov: 11875 ft: 14163 corp: 16/540b lim: 50 exec/s: 0 rss: 69Mb L: 28/46 MS: 1 EraseBytes- 00:08:04.814 [2024-11-17 23:08:01.296235] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.814 [2024-11-17 23:08:01.296264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.814 #42 NEW cov: 11875 ft: 14947 corp: 17/552b lim: 50 exec/s: 42 rss: 69Mb L: 12/46 MS: 4 CrossOver-ChangeBit-CrossOver-CMP- DE: "\000\000\000\037"- 00:08:04.814 [2024-11-17 23:08:01.347160] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.814 [2024-11-17 23:08:01.347189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.814 [2024-11-17 23:08:01.347310] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:04.814 [2024-11-17 23:08:01.347330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.814 [2024-11-17 23:08:01.347446] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:04.814 [2024-11-17 23:08:01.347469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.814 [2024-11-17 23:08:01.347585] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:04.814 [2024-11-17 23:08:01.347606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.814 #43 NEW cov: 11875 ft: 14948 corp: 18/601b lim: 50 exec/s: 43 rss: 69Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:08:04.814 [2024-11-17 23:08:01.387267] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.814 [2024-11-17 23:08:01.387295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:04.814 [2024-11-17 23:08:01.387388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:04.814 [2024-11-17 23:08:01.387410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:04.814 [2024-11-17 
23:08:01.387524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:04.814 [2024-11-17 23:08:01.387548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:04.814 [2024-11-17 23:08:01.387674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:04.814 [2024-11-17 23:08:01.387693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:04.814 #44 NEW cov: 11875 ft: 14973 corp: 19/648b lim: 50 exec/s: 44 rss: 69Mb L: 47/49 MS: 1 CopyPart- 00:08:04.814 [2024-11-17 23:08:01.426631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:04.814 [2024-11-17 23:08:01.426661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.074 #45 NEW cov: 11875 ft: 14992 corp: 20/660b lim: 50 exec/s: 45 rss: 70Mb L: 12/49 MS: 1 ChangeByte- 00:08:05.074 [2024-11-17 23:08:01.467535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.074 [2024-11-17 23:08:01.467569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.074 [2024-11-17 23:08:01.467684] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.074 [2024-11-17 23:08:01.467719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.074 [2024-11-17 23:08:01.467837] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.074 [2024-11-17 23:08:01.467857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.074 [2024-11-17 23:08:01.467972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.074 [2024-11-17 23:08:01.467993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.074 #46 NEW cov: 11875 ft: 15023 corp: 21/708b lim: 50 exec/s: 46 rss: 70Mb L: 48/49 MS: 1 CrossOver- 00:08:05.074 [2024-11-17 23:08:01.507339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.074 [2024-11-17 23:08:01.507367] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.074 [2024-11-17 23:08:01.507470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.074 [2024-11-17 23:08:01.507491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.074 [2024-11-17 23:08:01.507620] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.074 [2024-11-17 23:08:01.507641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.074 #47 NEW cov: 11875 ft: 
15034 corp: 22/742b lim: 50 exec/s: 47 rss: 70Mb L: 34/49 MS: 1 PersAutoDict- DE: "\000\000\000\037"- 00:08:05.075 [2024-11-17 23:08:01.547488] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.075 [2024-11-17 23:08:01.547520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.075 [2024-11-17 23:08:01.547626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.075 [2024-11-17 23:08:01.547650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.075 [2024-11-17 23:08:01.547774] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.075 [2024-11-17 23:08:01.547795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.075 #48 NEW cov: 11875 ft: 15036 corp: 23/775b lim: 50 exec/s: 48 rss: 70Mb L: 33/49 MS: 1 EraseBytes- 00:08:05.075 [2024-11-17 23:08:01.587778] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.075 [2024-11-17 23:08:01.587810] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.075 [2024-11-17 23:08:01.587903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.075 [2024-11-17 23:08:01.587929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.075 [2024-11-17 23:08:01.588043] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.075 [2024-11-17 23:08:01.588065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.075 [2024-11-17 23:08:01.588184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.075 [2024-11-17 23:08:01.588210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.075 #49 NEW cov: 11875 ft: 15070 corp: 24/815b lim: 50 exec/s: 49 rss: 70Mb L: 40/49 MS: 1 EraseBytes- 00:08:05.075 [2024-11-17 23:08:01.628005] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.075 [2024-11-17 23:08:01.628033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.075 [2024-11-17 23:08:01.628147] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.075 [2024-11-17 23:08:01.628168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.075 [2024-11-17 23:08:01.628275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.075 [2024-11-17 23:08:01.628295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.075 
[2024-11-17 23:08:01.628422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.075 [2024-11-17 23:08:01.628445] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.075 #50 NEW cov: 11875 ft: 15086 corp: 25/863b lim: 50 exec/s: 50 rss: 70Mb L: 48/49 MS: 1 ChangeByte- 00:08:05.075 [2024-11-17 23:08:01.668004] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.075 [2024-11-17 23:08:01.668036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.075 [2024-11-17 23:08:01.668129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.075 [2024-11-17 23:08:01.668152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.075 [2024-11-17 23:08:01.668275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.075 [2024-11-17 23:08:01.668299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.335 #51 NEW cov: 11875 ft: 15118 corp: 26/893b lim: 50 exec/s: 51 rss: 70Mb L: 30/49 MS: 1 CrossOver- 00:08:05.335 [2024-11-17 23:08:01.708176] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.335 [2024-11-17 23:08:01.708206] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.335 [2024-11-17 23:08:01.708311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.335 [2024-11-17 23:08:01.708331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.335 [2024-11-17 23:08:01.708458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.335 [2024-11-17 23:08:01.708478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.335 [2024-11-17 23:08:01.708632] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.335 [2024-11-17 23:08:01.708654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.335 #52 NEW cov: 11875 ft: 15124 corp: 27/939b lim: 50 exec/s: 52 rss: 70Mb L: 46/49 MS: 1 ChangeBinInt- 00:08:05.335 [2024-11-17 23:08:01.748084] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.335 [2024-11-17 23:08:01.748115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.335 [2024-11-17 23:08:01.748238] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.335 [2024-11-17 23:08:01.748259] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.335 
[2024-11-17 23:08:01.748380] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.335 [2024-11-17 23:08:01.748404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.335 #53 NEW cov: 11875 ft: 15131 corp: 28/971b lim: 50 exec/s: 53 rss: 70Mb L: 32/49 MS: 1 EraseBytes- 00:08:05.335 [2024-11-17 23:08:01.787976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.335 [2024-11-17 23:08:01.788008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.335 [2024-11-17 23:08:01.788132] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.335 [2024-11-17 23:08:01.788155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.335 #54 NEW cov: 11875 ft: 15132 corp: 29/999b lim: 50 exec/s: 54 rss: 70Mb L: 28/49 MS: 1 ChangeBit- 00:08:05.335 [2024-11-17 23:08:01.838577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.335 [2024-11-17 23:08:01.838607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.335 [2024-11-17 23:08:01.838706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.335 [2024-11-17 23:08:01.838731] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.335 [2024-11-17 23:08:01.838850] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.335 [2024-11-17 23:08:01.838873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.335 [2024-11-17 23:08:01.838996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.335 [2024-11-17 23:08:01.839018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.335 #55 NEW cov: 11875 ft: 15166 corp: 30/1039b lim: 50 exec/s: 55 rss: 70Mb L: 40/49 MS: 1 ChangeByte- 00:08:05.335 [2024-11-17 23:08:01.888497] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.335 [2024-11-17 23:08:01.888537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.335 [2024-11-17 23:08:01.888653] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.335 [2024-11-17 23:08:01.888678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.335 [2024-11-17 23:08:01.888804] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.335 [2024-11-17 23:08:01.888827] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.335 #56 NEW 
cov: 11875 ft: 15175 corp: 31/1069b lim: 50 exec/s: 56 rss: 70Mb L: 30/49 MS: 1 ShuffleBytes- 00:08:05.335 [2024-11-17 23:08:01.928865] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.335 [2024-11-17 23:08:01.928896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.335 [2024-11-17 23:08:01.928991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.335 [2024-11-17 23:08:01.929015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.335 [2024-11-17 23:08:01.929135] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.335 [2024-11-17 23:08:01.929158] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.335 [2024-11-17 23:08:01.929284] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.335 [2024-11-17 23:08:01.929308] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.595 #57 NEW cov: 11875 ft: 15189 corp: 32/1117b lim: 50 exec/s: 57 rss: 70Mb L: 48/49 MS: 1 ChangeBit- 00:08:05.595 [2024-11-17 23:08:01.978749] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.595 [2024-11-17 23:08:01.978780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.595 [2024-11-17 23:08:01.978898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.595 [2024-11-17 23:08:01.978920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.595 [2024-11-17 23:08:01.979046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.595 [2024-11-17 23:08:01.979069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.595 #58 NEW cov: 11875 ft: 15202 corp: 33/1149b lim: 50 exec/s: 58 rss: 70Mb L: 32/49 MS: 1 CopyPart- 00:08:05.595 [2024-11-17 23:08:02.029302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.595 [2024-11-17 23:08:02.029334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.595 [2024-11-17 23:08:02.029450] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.595 [2024-11-17 23:08:02.029484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.595 [2024-11-17 23:08:02.029600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.595 [2024-11-17 23:08:02.029623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.595 [2024-11-17 
23:08:02.029753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.595 [2024-11-17 23:08:02.029779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.595 #59 NEW cov: 11875 ft: 15207 corp: 34/1196b lim: 50 exec/s: 59 rss: 70Mb L: 47/49 MS: 1 InsertRepeatedBytes- 00:08:05.595 [2024-11-17 23:08:02.069125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.595 [2024-11-17 23:08:02.069157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.595 [2024-11-17 23:08:02.069261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.595 [2024-11-17 23:08:02.069284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.595 [2024-11-17 23:08:02.069406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.595 [2024-11-17 23:08:02.069428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.595 [2024-11-17 23:08:02.069560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.595 [2024-11-17 23:08:02.069583] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.595 #65 NEW cov: 11875 ft: 15214 corp: 35/1240b lim: 50 exec/s: 65 rss: 70Mb L: 44/49 MS: 1 InsertRepeatedBytes- 00:08:05.595 [2024-11-17 23:08:02.119471] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.595 [2024-11-17 23:08:02.119502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.595 [2024-11-17 23:08:02.119588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.595 [2024-11-17 23:08:02.119614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.595 [2024-11-17 23:08:02.119732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.595 [2024-11-17 23:08:02.119753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.595 [2024-11-17 23:08:02.119883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.595 [2024-11-17 23:08:02.119903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.595 #66 NEW cov: 11875 ft: 15286 corp: 36/1288b lim: 50 exec/s: 66 rss: 70Mb L: 48/49 MS: 1 ChangeByte- 00:08:05.595 [2024-11-17 23:08:02.169491] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.595 [2024-11-17 23:08:02.169526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.595 
[2024-11-17 23:08:02.169644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.595 [2024-11-17 23:08:02.169668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.595 [2024-11-17 23:08:02.169788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.595 [2024-11-17 23:08:02.169812] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.595 [2024-11-17 23:08:02.169932] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:08:05.595 [2024-11-17 23:08:02.169955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:05.595 #67 NEW cov: 11875 ft: 15320 corp: 37/1330b lim: 50 exec/s: 67 rss: 70Mb L: 42/49 MS: 1 ShuffleBytes- 00:08:05.856 [2024-11-17 23:08:02.209246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.856 [2024-11-17 23:08:02.209272] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.856 [2024-11-17 23:08:02.209396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.856 [2024-11-17 23:08:02.209420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.856 #68 NEW cov: 11875 ft: 15337 corp: 38/1353b lim: 50 exec/s: 68 rss: 70Mb L: 23/49 MS: 1 CopyPart- 00:08:05.856 [2024-11-17 23:08:02.259286] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.856 [2024-11-17 23:08:02.259316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.856 [2024-11-17 23:08:02.259428] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.856 [2024-11-17 23:08:02.259450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.856 [2024-11-17 23:08:02.259586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.856 [2024-11-17 23:08:02.259612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.856 #69 NEW cov: 11875 ft: 15341 corp: 39/1385b lim: 50 exec/s: 69 rss: 70Mb L: 32/49 MS: 1 CopyPart- 00:08:05.856 [2024-11-17 23:08:02.309494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:08:05.856 [2024-11-17 23:08:02.309522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:05.856 [2024-11-17 23:08:02.309660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:08:05.856 [2024-11-17 23:08:02.309687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:05.856 
[2024-11-17 23:08:02.309819] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:08:05.856 [2024-11-17 23:08:02.309841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:05.856 #70 NEW cov: 11875 ft: 15357 corp: 40/1417b lim: 50 exec/s: 35 rss: 70Mb L: 32/49 MS: 1 PersAutoDict- DE: "\017\000\000\000"- 00:08:05.856 #70 DONE cov: 11875 ft: 15357 corp: 40/1417b lim: 50 exec/s: 35 rss: 70Mb 00:08:05.856 ###### Recommended dictionary. ###### 00:08:05.856 "\017\000\000\000" # Uses: 1 00:08:05.856 "\000\000\000\037" # Uses: 1 00:08:05.856 ###### End of recommended dictionary. ###### 00:08:05.856 Done 70 runs in 2 second(s) 00:08:05.856 23:08:02 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_21.conf 00:08:05.856 23:08:02 -- ../common.sh@72 -- # (( i++ )) 00:08:05.856 23:08:02 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:05.856 23:08:02 -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:08:05.856 23:08:02 -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:08:05.856 23:08:02 -- nvmf/run.sh@24 -- # local timen=1 00:08:05.856 23:08:02 -- nvmf/run.sh@25 -- # local core=0x1 00:08:05.856 23:08:02 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:05.856 23:08:02 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:08:05.856 23:08:02 -- nvmf/run.sh@29 -- # printf %02d 22 00:08:05.856 23:08:02 -- nvmf/run.sh@29 -- # port=4422 00:08:05.856 23:08:02 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:05.856 23:08:02 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:08:05.856 23:08:02 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:05.856 23:08:02 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 -r /var/tmp/spdk22.sock 00:08:06.116 [2024-11-17 23:08:02.484099] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
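The xtrace lines above are the complete per-run recipe: the loop in ../common.sh advances i and, while i < fuzz_num, calls start_llvm_fuzz with a fuzzer number, a time budget, and a core mask, and nvmf/run.sh then derives every other resource from the fuzzer number alone. Below is a condensed sketch of what the trace shows, reconstructed rather than copied from nvmf/run.sh: $rootdir stands in for the workspace spdk directory, the port derivation is inferred from the printf %02d / port=4422 pair, and the redirection of sed's output into $nvmf_cfg is inferred, since the rewritten config is what -c later consumes.

  # Reconstructed from the xtrace above; not the verbatim nvmf/run.sh.
  rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  start_llvm_fuzz() {
    local fuzzer_type=$1 timen=$2 core=$3
    local corpus_dir=$rootdir/../corpus/llvm_nvmf_$fuzzer_type
    local nvmf_cfg=/tmp/fuzz_json_$fuzzer_type.conf
    local port="44$(printf %02d $fuzzer_type)"   # 22 -> 4422, 23 -> 4423
    mkdir -p $corpus_dir
    # re-target the shared JSON template away from the default trsvcid 4420
    sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      $rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf > $nvmf_cfg
    $rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m $core -s 512 \
      -P $rootdir/../output/llvm/ \
      -F "trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port" \
      -c $nvmf_cfg -t $timen -D $corpus_dir -Z $fuzzer_type \
      -r /var/tmp/spdk$fuzzer_type.sock
  }

On this reading, start_llvm_fuzz 22 1 0x1 reproduces the invocation above, and each fuzzer type ends up with a disjoint TCP port, config file, corpus directory, and RPC socket, which is what lets consecutive runs (21, 22, 23, ...) reuse the same host without trampling each other.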
00:08:06.116 [2024-11-17 23:08:02.484167] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306703 ] 00:08:06.116 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.116 [2024-11-17 23:08:02.659991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.116 [2024-11-17 23:08:02.723066] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:06.116 [2024-11-17 23:08:02.723191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.375 [2024-11-17 23:08:02.781134] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.375 [2024-11-17 23:08:02.797435] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:08:06.375 INFO: Running with entropic power schedule (0xFF, 100). 00:08:06.375 INFO: Seed: 3476621888 00:08:06.375 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:06.375 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:06.375 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:08:06.375 INFO: A corpus is not provided, starting from an empty corpus 00:08:06.375 #2 INITED exec/s: 0 rss: 60Mb 00:08:06.375 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:06.375 This may also happen if the target rejected all inputs we tried so far 00:08:06.375 [2024-11-17 23:08:02.842459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:06.375 [2024-11-17 23:08:02.842491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.635 NEW_FUNC[1/672]: 0x461ed8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:08:06.635 NEW_FUNC[2/672]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:06.635 #12 NEW cov: 11663 ft: 11664 corp: 2/32b lim: 85 exec/s: 0 rss: 68Mb L: 31/31 MS: 5 CopyPart-CrossOver-ChangeBit-EraseBytes-InsertRepeatedBytes- 00:08:06.635 [2024-11-17 23:08:03.153295] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:06.635 [2024-11-17 23:08:03.153326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.635 #18 NEW cov: 11778 ft: 12196 corp: 3/63b lim: 85 exec/s: 0 rss: 68Mb L: 31/31 MS: 1 CMP- DE: "\030\000\000\000"- 00:08:06.635 [2024-11-17 23:08:03.203378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:06.635 [2024-11-17 23:08:03.203407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.635 #24 NEW cov: 11784 ft: 12519 corp: 4/94b lim: 85 exec/s: 0 rss: 68Mb L: 31/31 MS: 1 PersAutoDict- DE: "\030\000\000\000"- 00:08:06.635 [2024-11-17 23:08:03.243507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:06.635 [2024-11-17 23:08:03.243539] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.894 #25 NEW cov: 11869 ft: 12852 corp: 5/126b lim: 85 exec/s: 0 rss: 68Mb L: 32/32 MS: 1 InsertByte- 00:08:06.894 [2024-11-17 23:08:03.283609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:06.894 [2024-11-17 23:08:03.283638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.894 #26 NEW cov: 11869 ft: 12960 corp: 6/158b lim: 85 exec/s: 0 rss: 68Mb L: 32/32 MS: 1 CrossOver- 00:08:06.894 [2024-11-17 23:08:03.323908] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:06.894 [2024-11-17 23:08:03.323937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.894 [2024-11-17 23:08:03.323992] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:06.894 [2024-11-17 23:08:03.324010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.894 #36 NEW cov: 11869 ft: 13778 corp: 7/192b lim: 85 exec/s: 0 rss: 68Mb L: 34/34 MS: 5 ShuffleBytes-InsertByte-ChangeBit-InsertByte-CrossOver- 00:08:06.894 [2024-11-17 23:08:03.364190] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:06.894 [2024-11-17 23:08:03.364219] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.894 [2024-11-17 23:08:03.364279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:06.894 [2024-11-17 23:08:03.364296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.894 #37 NEW cov: 11878 ft: 13869 corp: 8/229b lim: 85 exec/s: 0 rss: 68Mb L: 37/37 MS: 1 CopyPart- 00:08:06.894 [2024-11-17 23:08:03.403973] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:06.894 [2024-11-17 23:08:03.404002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.894 #38 NEW cov: 11878 ft: 13923 corp: 9/261b lim: 85 exec/s: 0 rss: 68Mb L: 32/37 MS: 1 PersAutoDict- DE: "\030\000\000\000"- 00:08:06.894 [2024-11-17 23:08:03.444072] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:06.894 [2024-11-17 23:08:03.444100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.894 #39 NEW cov: 11878 ft: 13967 corp: 10/293b lim: 85 exec/s: 0 rss: 68Mb L: 32/37 MS: 1 CopyPart- 00:08:06.894 [2024-11-17 23:08:03.484302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:06.894 [2024-11-17 23:08:03.484331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:06.894 [2024-11-17 23:08:03.484385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION 
REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:06.894 [2024-11-17 23:08:03.484402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:06.894 #41 NEW cov: 11878 ft: 14038 corp: 11/328b lim: 85 exec/s: 0 rss: 68Mb L: 35/37 MS: 2 ChangeBit-CrossOver- 00:08:07.154 [2024-11-17 23:08:03.524418] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.154 [2024-11-17 23:08:03.524446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.154 [2024-11-17 23:08:03.524495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:07.154 [2024-11-17 23:08:03.524511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.154 #42 NEW cov: 11878 ft: 14150 corp: 12/362b lim: 85 exec/s: 0 rss: 68Mb L: 34/37 MS: 1 PersAutoDict- DE: "\030\000\000\000"- 00:08:07.154 [2024-11-17 23:08:03.564590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.154 [2024-11-17 23:08:03.564618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.154 [2024-11-17 23:08:03.564670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:07.154 [2024-11-17 23:08:03.564686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.154 #43 NEW cov: 11878 ft: 14195 corp: 13/399b lim: 85 exec/s: 0 rss: 69Mb L: 37/37 MS: 1 ChangeByte- 00:08:07.154 [2024-11-17 23:08:03.604526] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.154 [2024-11-17 23:08:03.604560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.154 #44 NEW cov: 11878 ft: 14210 corp: 14/431b lim: 85 exec/s: 0 rss: 69Mb L: 32/37 MS: 1 CrossOver- 00:08:07.154 [2024-11-17 23:08:03.644674] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.154 [2024-11-17 23:08:03.644703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.154 #45 NEW cov: 11878 ft: 14218 corp: 15/463b lim: 85 exec/s: 0 rss: 69Mb L: 32/37 MS: 1 InsertByte- 00:08:07.154 [2024-11-17 23:08:03.674861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.154 [2024-11-17 23:08:03.674888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.154 [2024-11-17 23:08:03.674945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:07.154 [2024-11-17 23:08:03.674960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.154 #46 NEW cov: 11878 ft: 14269 corp: 16/501b lim: 85 exec/s: 0 rss: 69Mb L: 38/38 MS: 1 InsertByte- 00:08:07.154 [2024-11-17 23:08:03.714865] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.154 [2024-11-17 23:08:03.714892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.154 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:07.154 #47 NEW cov: 11901 ft: 14301 corp: 17/533b lim: 85 exec/s: 0 rss: 69Mb L: 32/38 MS: 1 ShuffleBytes- 00:08:07.154 [2024-11-17 23:08:03.754984] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.154 [2024-11-17 23:08:03.755011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.414 #48 NEW cov: 11901 ft: 14329 corp: 18/565b lim: 85 exec/s: 0 rss: 69Mb L: 32/38 MS: 1 ShuffleBytes- 00:08:07.414 [2024-11-17 23:08:03.795247] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.414 [2024-11-17 23:08:03.795274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.414 [2024-11-17 23:08:03.795327] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:07.414 [2024-11-17 23:08:03.795344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.414 #49 NEW cov: 11901 ft: 14370 corp: 19/600b lim: 85 exec/s: 0 rss: 69Mb L: 35/38 MS: 1 ChangeByte- 00:08:07.414 [2024-11-17 23:08:03.835378] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.414 [2024-11-17 23:08:03.835405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.414 [2024-11-17 23:08:03.835462] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:07.414 [2024-11-17 23:08:03.835478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.414 #50 NEW cov: 11901 ft: 14386 corp: 20/650b lim: 85 exec/s: 50 rss: 69Mb L: 50/50 MS: 1 CrossOver- 00:08:07.414 [2024-11-17 23:08:03.875495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.414 [2024-11-17 23:08:03.875522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.414 [2024-11-17 23:08:03.875551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:07.414 [2024-11-17 23:08:03.875562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.414 [2024-11-17 23:08:03.915567] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.414 [2024-11-17 23:08:03.915594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.414 [2024-11-17 23:08:03.915631] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 
cid:1 nsid:0 00:08:07.414 [2024-11-17 23:08:03.915650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.414 #52 NEW cov: 11901 ft: 14399 corp: 21/700b lim: 85 exec/s: 52 rss: 69Mb L: 50/50 MS: 2 CrossOver-CopyPart- 00:08:07.414 [2024-11-17 23:08:03.955550] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.414 [2024-11-17 23:08:03.955578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.414 #53 NEW cov: 11901 ft: 14446 corp: 22/731b lim: 85 exec/s: 53 rss: 69Mb L: 31/50 MS: 1 CopyPart- 00:08:07.414 [2024-11-17 23:08:03.995644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.414 [2024-11-17 23:08:03.995672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.414 #54 NEW cov: 11901 ft: 14489 corp: 23/762b lim: 85 exec/s: 54 rss: 69Mb L: 31/50 MS: 1 ChangeBit- 00:08:07.414 [2024-11-17 23:08:04.025909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.414 [2024-11-17 23:08:04.025937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.414 [2024-11-17 23:08:04.025988] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:07.415 [2024-11-17 23:08:04.026004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.674 #55 NEW cov: 11901 ft: 14532 corp: 24/797b lim: 85 exec/s: 55 rss: 69Mb L: 35/50 MS: 1 ChangeBinInt- 00:08:07.674 [2024-11-17 23:08:04.065868] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.674 [2024-11-17 23:08:04.065895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.674 #56 NEW cov: 11901 ft: 14540 corp: 25/830b lim: 85 exec/s: 56 rss: 69Mb L: 33/50 MS: 1 InsertByte- 00:08:07.674 [2024-11-17 23:08:04.105964] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.674 [2024-11-17 23:08:04.105992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.674 #57 NEW cov: 11901 ft: 14615 corp: 26/862b lim: 85 exec/s: 57 rss: 69Mb L: 32/50 MS: 1 ChangeBinInt- 00:08:07.674 [2024-11-17 23:08:04.146079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.674 [2024-11-17 23:08:04.146106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.674 #58 NEW cov: 11901 ft: 14635 corp: 27/895b lim: 85 exec/s: 58 rss: 69Mb L: 33/50 MS: 1 InsertByte- 00:08:07.674 [2024-11-17 23:08:04.186220] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.674 [2024-11-17 23:08:04.186248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:1 00:08:07.674 #64 NEW cov: 11901 ft: 14643 corp: 28/927b lim: 85 exec/s: 64 rss: 69Mb L: 32/50 MS: 1 ShuffleBytes- 00:08:07.674 [2024-11-17 23:08:04.216312] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.674 [2024-11-17 23:08:04.216340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.674 #75 NEW cov: 11901 ft: 14651 corp: 29/959b lim: 85 exec/s: 75 rss: 69Mb L: 32/50 MS: 1 ChangeBit- 00:08:07.674 [2024-11-17 23:08:04.246581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.674 [2024-11-17 23:08:04.246608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.674 [2024-11-17 23:08:04.246662] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:07.674 [2024-11-17 23:08:04.246682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.674 #76 NEW cov: 11901 ft: 14664 corp: 30/993b lim: 85 exec/s: 76 rss: 69Mb L: 34/50 MS: 1 ChangeBinInt- 00:08:07.674 [2024-11-17 23:08:04.286701] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.674 [2024-11-17 23:08:04.286728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.674 [2024-11-17 23:08:04.286784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:07.674 [2024-11-17 23:08:04.286799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.935 [2024-11-17 23:08:04.316782] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.935 [2024-11-17 23:08:04.316809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.935 [2024-11-17 23:08:04.316852] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:07.935 [2024-11-17 23:08:04.316868] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.935 #78 NEW cov: 11901 ft: 14694 corp: 31/1030b lim: 85 exec/s: 78 rss: 70Mb L: 37/50 MS: 2 ChangeByte-ChangeByte- 00:08:07.935 [2024-11-17 23:08:04.356765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.935 [2024-11-17 23:08:04.356794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.935 #79 NEW cov: 11901 ft: 14732 corp: 32/1062b lim: 85 exec/s: 79 rss: 70Mb L: 32/50 MS: 1 InsertByte- 00:08:07.935 [2024-11-17 23:08:04.386828] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.935 [2024-11-17 23:08:04.386857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.935 #80 NEW cov: 11901 ft: 14738 corp: 33/1084b lim: 85 exec/s: 
80 rss: 70Mb L: 22/50 MS: 1 EraseBytes- 00:08:07.935 [2024-11-17 23:08:04.426946] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.935 [2024-11-17 23:08:04.426974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.935 #81 NEW cov: 11901 ft: 14770 corp: 34/1104b lim: 85 exec/s: 81 rss: 70Mb L: 20/50 MS: 1 EraseBytes- 00:08:07.935 [2024-11-17 23:08:04.467266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.935 [2024-11-17 23:08:04.467294] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.935 [2024-11-17 23:08:04.467347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:07.935 [2024-11-17 23:08:04.467363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:07.935 #82 NEW cov: 11901 ft: 14773 corp: 35/1141b lim: 85 exec/s: 82 rss: 70Mb L: 37/50 MS: 1 CrossOver- 00:08:07.935 [2024-11-17 23:08:04.507196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.935 [2024-11-17 23:08:04.507224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:07.935 #83 NEW cov: 11901 ft: 14782 corp: 36/1171b lim: 85 exec/s: 83 rss: 70Mb L: 30/50 MS: 1 EraseBytes- 00:08:07.935 [2024-11-17 23:08:04.537298] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:07.935 [2024-11-17 23:08:04.537328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.195 #84 NEW cov: 11901 ft: 14816 corp: 37/1202b lim: 85 exec/s: 84 rss: 70Mb L: 31/50 MS: 1 CopyPart- 00:08:08.195 [2024-11-17 23:08:04.577374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.195 [2024-11-17 23:08:04.577402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.195 #85 NEW cov: 11901 ft: 14852 corp: 38/1234b lim: 85 exec/s: 85 rss: 70Mb L: 32/50 MS: 1 ShuffleBytes- 00:08:08.195 [2024-11-17 23:08:04.617693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.195 [2024-11-17 23:08:04.617720] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.195 [2024-11-17 23:08:04.617764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.195 [2024-11-17 23:08:04.617779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.195 #86 NEW cov: 11901 ft: 14861 corp: 39/1278b lim: 85 exec/s: 86 rss: 70Mb L: 44/50 MS: 1 EraseBytes- 00:08:08.195 [2024-11-17 23:08:04.657672] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.195 [2024-11-17 23:08:04.657700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.195 #87 NEW cov: 11901 ft: 14867 corp: 40/1301b lim: 85 exec/s: 87 rss: 70Mb L: 23/50 MS: 1 EraseBytes- 00:08:08.195 [2024-11-17 23:08:04.697728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.196 [2024-11-17 23:08:04.697756] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.196 #88 NEW cov: 11901 ft: 14873 corp: 41/1333b lim: 85 exec/s: 88 rss: 70Mb L: 32/50 MS: 1 CMP- DE: "\011\000\000\000"- 00:08:08.196 [2024-11-17 23:08:04.728152] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.196 [2024-11-17 23:08:04.728180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.196 [2024-11-17 23:08:04.728218] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:08:08.196 [2024-11-17 23:08:04.728234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:08.196 [2024-11-17 23:08:04.728290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:08:08.196 [2024-11-17 23:08:04.728306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:08.196 #89 NEW cov: 11901 ft: 15197 corp: 42/1396b lim: 85 exec/s: 89 rss: 70Mb L: 63/63 MS: 1 InsertRepeatedBytes- 00:08:08.196 [2024-11-17 23:08:04.778013] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.196 [2024-11-17 23:08:04.778041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.196 #90 NEW cov: 11901 ft: 15212 corp: 43/1426b lim: 85 exec/s: 90 rss: 70Mb L: 30/63 MS: 1 ShuffleBytes- 00:08:08.455 [2024-11-17 23:08:04.818123] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:08:08.455 [2024-11-17 23:08:04.818152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.455 #91 NEW cov: 11901 ft: 15219 corp: 44/1457b lim: 85 exec/s: 45 rss: 70Mb L: 31/63 MS: 1 ShuffleBytes- 00:08:08.455 #91 DONE cov: 11901 ft: 15219 corp: 44/1457b lim: 85 exec/s: 45 rss: 70Mb 00:08:08.455 ###### Recommended dictionary. ###### 00:08:08.455 "\030\000\000\000" # Uses: 4 00:08:08.455 "\011\000\000\000" # Uses: 0 00:08:08.455 ###### End of recommended dictionary. 
###### 00:08:08.455 Done 91 runs in 2 second(s) 00:08:08.455 23:08:04 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_22.conf 00:08:08.455 23:08:04 -- ../common.sh@72 -- # (( i++ )) 00:08:08.455 23:08:04 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:08.455 23:08:04 -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:08:08.455 23:08:04 -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:08:08.455 23:08:04 -- nvmf/run.sh@24 -- # local timen=1 00:08:08.455 23:08:04 -- nvmf/run.sh@25 -- # local core=0x1 00:08:08.455 23:08:04 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:08.455 23:08:04 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:08:08.455 23:08:04 -- nvmf/run.sh@29 -- # printf %02d 23 00:08:08.455 23:08:04 -- nvmf/run.sh@29 -- # port=4423 00:08:08.455 23:08:04 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:08.455 23:08:04 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:08:08.455 23:08:04 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:08.455 23:08:04 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 -r /var/tmp/spdk23.sock 00:08:08.455 [2024-11-17 23:08:04.992122] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:08.455 [2024-11-17 23:08:04.992189] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307073 ] 00:08:08.455 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.715 [2024-11-17 23:08:05.174399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.715 [2024-11-17 23:08:05.237882] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:08.715 [2024-11-17 23:08:05.238010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.715 [2024-11-17 23:08:05.296226] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.715 [2024-11-17 23:08:05.312553] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:08:08.715 INFO: Running with entropic power schedule (0xFF, 100). 00:08:08.715 INFO: Seed: 1697635250 00:08:08.976 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:08.976 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:08.976 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:08:08.976 INFO: A corpus is not provided, starting from an empty corpus 00:08:08.976 #2 INITED exec/s: 0 rss: 60Mb 00:08:08.976 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
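Run 23's preamble repeats the pattern of runs 21 and 22: the target binds its own port (4423), libFuzzer reports a fresh random seed and the counter/PC tables compiled into the binary, and "0 files found" in the just-created corpus directory means the run starts from an empty corpus, which is also why the warning about no interesting inputs appears before the first mutations land. The "Recommended dictionary" block that closes each run (see runs 21 and 22 above) lists byte sequences whose mutations apparently paid off, with their # Uses counts, printed as octal escapes. Nothing in this harness feeds them back, but as a hypothetical follow-up they could seed a later libFuzzer run through the standard AFL-compatible -dict= format, which takes name="value" entries with \xHH hex escapes, so the octal needs converting:

  # Hypothetical, not part of nvmf/run.sh: keep run 22's recommended entries.
  # Octal "\030\000\000\000" is hex \x18\x00\x00\x00; "\011\000\000\000" is \x09\x00\x00\x00.
  printf '%s\n' 'kw1="\x18\x00\x00\x00"' 'kw2="\x09\x00\x00\x00"' > llvm_nvmf_22.dict
  # a stock libFuzzer target would then be launched with -dict=llvm_nvmf_22.dict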
00:08:08.976 This may also happen if the target rejected all inputs we tried so far 00:08:08.976 [2024-11-17 23:08:05.357695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:08.976 [2024-11-17 23:08:05.357727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:08.976 [2024-11-17 23:08:05.357784] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:08.976 [2024-11-17 23:08:05.357800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.238 NEW_FUNC[1/671]: 0x465118 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:08:09.238 NEW_FUNC[2/671]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:09.238 #3 NEW cov: 11598 ft: 11599 corp: 2/11b lim: 25 exec/s: 0 rss: 68Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:08:09.238 [2024-11-17 23:08:05.658441] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:09.238 [2024-11-17 23:08:05.658474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.238 [2024-11-17 23:08:05.658530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:09.238 [2024-11-17 23:08:05.658551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.238 #9 NEW cov: 11711 ft: 12018 corp: 3/21b lim: 25 exec/s: 0 rss: 68Mb L: 10/10 MS: 1 ChangeByte- 00:08:09.238 [2024-11-17 23:08:05.698530] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:09.238 [2024-11-17 23:08:05.698565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.238 [2024-11-17 23:08:05.698610] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:09.238 [2024-11-17 23:08:05.698625] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.238 #15 NEW cov: 11717 ft: 12393 corp: 4/31b lim: 25 exec/s: 0 rss: 68Mb L: 10/10 MS: 1 ChangeByte- 00:08:09.238 [2024-11-17 23:08:05.738661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:09.238 [2024-11-17 23:08:05.738691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.238 [2024-11-17 23:08:05.738750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:09.238 [2024-11-17 23:08:05.738766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.238 #16 NEW cov: 11802 ft: 12708 corp: 5/44b lim: 25 exec/s: 0 rss: 68Mb L: 13/13 MS: 1 InsertRepeatedBytes- 00:08:09.238 [2024-11-17 23:08:05.778755] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT 
(0e) sqid:1 cid:0 nsid:0 00:08:09.238 [2024-11-17 23:08:05.778781] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.238 [2024-11-17 23:08:05.778820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:09.238 [2024-11-17 23:08:05.778834] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.238 #17 NEW cov: 11802 ft: 12802 corp: 6/55b lim: 25 exec/s: 0 rss: 68Mb L: 11/13 MS: 1 CrossOver- 00:08:09.238 [2024-11-17 23:08:05.808820] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:09.238 [2024-11-17 23:08:05.808848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.238 [2024-11-17 23:08:05.808894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:09.238 [2024-11-17 23:08:05.808908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.238 #18 NEW cov: 11802 ft: 12977 corp: 7/69b lim: 25 exec/s: 0 rss: 68Mb L: 14/14 MS: 1 InsertByte- 00:08:09.238 [2024-11-17 23:08:05.848937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:09.238 [2024-11-17 23:08:05.848965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.238 [2024-11-17 23:08:05.849007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:09.238 [2024-11-17 23:08:05.849023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.498 #19 NEW cov: 11802 ft: 13034 corp: 8/79b lim: 25 exec/s: 0 rss: 68Mb L: 10/14 MS: 1 CopyPart- 00:08:09.498 [2024-11-17 23:08:05.889290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:09.498 [2024-11-17 23:08:05.889317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.498 [2024-11-17 23:08:05.889356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:09.498 [2024-11-17 23:08:05.889372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.498 [2024-11-17 23:08:05.889427] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:09.498 [2024-11-17 23:08:05.889442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.498 [2024-11-17 23:08:05.889498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:09.498 [2024-11-17 23:08:05.889514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.498 #20 NEW cov: 11802 ft: 13581 corp: 9/100b lim: 25 exec/s: 0 rss: 68Mb L: 21/21 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\377"- 
00:08:09.498 [2024-11-17 23:08:05.929144] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:09.498 [2024-11-17 23:08:05.929172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.498 [2024-11-17 23:08:05.929211] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:09.498 [2024-11-17 23:08:05.929226] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.498 #21 NEW cov: 11802 ft: 13641 corp: 10/110b lim: 25 exec/s: 0 rss: 68Mb L: 10/21 MS: 1 ChangeByte- 00:08:09.498 [2024-11-17 23:08:05.969289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:09.498 [2024-11-17 23:08:05.969315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.498 [2024-11-17 23:08:05.969359] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:09.498 [2024-11-17 23:08:05.969374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.498 #22 NEW cov: 11802 ft: 13726 corp: 11/124b lim: 25 exec/s: 0 rss: 69Mb L: 14/21 MS: 1 CopyPart- 00:08:09.498 [2024-11-17 23:08:06.009436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:09.498 [2024-11-17 23:08:06.009463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.498 [2024-11-17 23:08:06.009518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:09.498 [2024-11-17 23:08:06.009540] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.498 #23 NEW cov: 11802 ft: 13753 corp: 12/138b lim: 25 exec/s: 0 rss: 69Mb L: 14/21 MS: 1 ShuffleBytes- 00:08:09.498 [2024-11-17 23:08:06.049518] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:09.498 [2024-11-17 23:08:06.049550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.498 [2024-11-17 23:08:06.049590] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:09.498 [2024-11-17 23:08:06.049606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.498 #29 NEW cov: 11802 ft: 13807 corp: 13/149b lim: 25 exec/s: 0 rss: 69Mb L: 11/21 MS: 1 InsertByte- 00:08:09.498 [2024-11-17 23:08:06.089738] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:09.498 [2024-11-17 23:08:06.089764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.498 [2024-11-17 23:08:06.089805] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:09.498 [2024-11-17 23:08:06.089821] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.498 [2024-11-17 23:08:06.089877] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:09.498 [2024-11-17 23:08:06.089891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.757 #30 NEW cov: 11802 ft: 14023 corp: 14/166b lim: 25 exec/s: 0 rss: 69Mb L: 17/21 MS: 1 InsertRepeatedBytes- 00:08:09.757 [2024-11-17 23:08:06.129709] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:09.757 [2024-11-17 23:08:06.129738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.758 [2024-11-17 23:08:06.129788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:09.758 [2024-11-17 23:08:06.129803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.758 #31 NEW cov: 11802 ft: 14035 corp: 15/180b lim: 25 exec/s: 0 rss: 69Mb L: 14/21 MS: 1 ShuffleBytes- 00:08:09.758 [2024-11-17 23:08:06.169727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:09.758 [2024-11-17 23:08:06.169755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.758 #32 NEW cov: 11802 ft: 14431 corp: 16/187b lim: 25 exec/s: 0 rss: 69Mb L: 7/21 MS: 1 EraseBytes- 00:08:09.758 [2024-11-17 23:08:06.209983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:09.758 [2024-11-17 23:08:06.210010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.758 [2024-11-17 23:08:06.210054] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:09.758 [2024-11-17 23:08:06.210069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.758 #33 NEW cov: 11802 ft: 14461 corp: 17/201b lim: 25 exec/s: 0 rss: 69Mb L: 14/21 MS: 1 ChangeByte- 00:08:09.758 [2024-11-17 23:08:06.250308] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:09.758 [2024-11-17 23:08:06.250336] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.758 [2024-11-17 23:08:06.250385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:09.758 [2024-11-17 23:08:06.250401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.758 [2024-11-17 23:08:06.250459] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:09.758 [2024-11-17 23:08:06.250473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:09.758 [2024-11-17 23:08:06.250529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 
cid:3 nsid:0 00:08:09.758 [2024-11-17 23:08:06.250548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:09.758 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:09.758 #34 NEW cov: 11825 ft: 14589 corp: 18/222b lim: 25 exec/s: 0 rss: 69Mb L: 21/21 MS: 1 ChangeBit- 00:08:09.758 [2024-11-17 23:08:06.300275] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:09.758 [2024-11-17 23:08:06.300302] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.758 [2024-11-17 23:08:06.300356] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:09.758 [2024-11-17 23:08:06.300372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.758 #35 NEW cov: 11825 ft: 14622 corp: 19/233b lim: 25 exec/s: 0 rss: 69Mb L: 11/21 MS: 1 EraseBytes- 00:08:09.758 [2024-11-17 23:08:06.340388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:09.758 [2024-11-17 23:08:06.340416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:09.758 [2024-11-17 23:08:06.340470] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:09.758 [2024-11-17 23:08:06.340486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:09.758 #36 NEW cov: 11825 ft: 14653 corp: 20/247b lim: 25 exec/s: 36 rss: 69Mb L: 14/21 MS: 1 InsertByte- 00:08:10.018 [2024-11-17 23:08:06.380374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.018 [2024-11-17 23:08:06.380403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.018 #37 NEW cov: 11825 ft: 14675 corp: 21/254b lim: 25 exec/s: 37 rss: 69Mb L: 7/21 MS: 1 ShuffleBytes- 00:08:10.018 [2024-11-17 23:08:06.420645] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.018 [2024-11-17 23:08:06.420674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.018 [2024-11-17 23:08:06.420719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.018 [2024-11-17 23:08:06.420734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.018 #38 NEW cov: 11825 ft: 14701 corp: 22/265b lim: 25 exec/s: 38 rss: 69Mb L: 11/21 MS: 1 ChangeByte- 00:08:10.018 [2024-11-17 23:08:06.460754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.018 [2024-11-17 23:08:06.460782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.018 [2024-11-17 23:08:06.460821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT 
(0e) sqid:1 cid:1 nsid:0 00:08:10.018 [2024-11-17 23:08:06.460836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.018 #39 NEW cov: 11825 ft: 14716 corp: 23/278b lim: 25 exec/s: 39 rss: 69Mb L: 13/21 MS: 1 CopyPart- 00:08:10.018 [2024-11-17 23:08:06.490822] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.018 [2024-11-17 23:08:06.490851] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.018 [2024-11-17 23:08:06.490905] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.018 [2024-11-17 23:08:06.490920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.018 #40 NEW cov: 11825 ft: 14752 corp: 24/288b lim: 25 exec/s: 40 rss: 69Mb L: 10/21 MS: 1 CMP- DE: "\000\000\000\000\377\377\377\377"- 00:08:10.018 [2024-11-17 23:08:06.531046] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.018 [2024-11-17 23:08:06.531078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.018 [2024-11-17 23:08:06.531122] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.018 [2024-11-17 23:08:06.531138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.018 [2024-11-17 23:08:06.531196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:10.018 [2024-11-17 23:08:06.531211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.018 #41 NEW cov: 11825 ft: 14762 corp: 25/306b lim: 25 exec/s: 41 rss: 69Mb L: 18/21 MS: 1 InsertRepeatedBytes- 00:08:10.018 [2024-11-17 23:08:06.571082] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.018 [2024-11-17 23:08:06.571110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.018 [2024-11-17 23:08:06.571153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.018 [2024-11-17 23:08:06.571168] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.018 #42 NEW cov: 11825 ft: 14775 corp: 26/317b lim: 25 exec/s: 42 rss: 69Mb L: 11/21 MS: 1 ChangeBinInt- 00:08:10.018 [2024-11-17 23:08:06.611337] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.018 [2024-11-17 23:08:06.611363] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.018 [2024-11-17 23:08:06.611402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.018 [2024-11-17 23:08:06.611418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:08:10.018 [2024-11-17 23:08:06.611475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:10.018 [2024-11-17 23:08:06.611491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.278 #43 NEW cov: 11825 ft: 14793 corp: 27/335b lim: 25 exec/s: 43 rss: 69Mb L: 18/21 MS: 1 InsertRepeatedBytes- 00:08:10.278 [2024-11-17 23:08:06.651318] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.278 [2024-11-17 23:08:06.651346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.278 [2024-11-17 23:08:06.651402] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.278 [2024-11-17 23:08:06.651417] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.278 #44 NEW cov: 11825 ft: 14857 corp: 28/345b lim: 25 exec/s: 44 rss: 70Mb L: 10/21 MS: 1 ChangeByte- 00:08:10.278 [2024-11-17 23:08:06.691542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.278 [2024-11-17 23:08:06.691570] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.278 [2024-11-17 23:08:06.691609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.278 [2024-11-17 23:08:06.691626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.278 [2024-11-17 23:08:06.691682] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:10.278 [2024-11-17 23:08:06.691697] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.278 #45 NEW cov: 11825 ft: 14922 corp: 29/360b lim: 25 exec/s: 45 rss: 70Mb L: 15/21 MS: 1 CrossOver- 00:08:10.278 [2024-11-17 23:08:06.731658] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.279 [2024-11-17 23:08:06.731686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.279 [2024-11-17 23:08:06.731724] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.279 [2024-11-17 23:08:06.731740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.279 [2024-11-17 23:08:06.731797] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:10.279 [2024-11-17 23:08:06.731813] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.279 #46 NEW cov: 11825 ft: 14930 corp: 30/378b lim: 25 exec/s: 46 rss: 70Mb L: 18/21 MS: 1 ChangeBinInt- 00:08:10.279 [2024-11-17 23:08:06.771514] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.279 [2024-11-17 23:08:06.771548] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.279 #47 NEW cov: 11825 ft: 14966 corp: 31/384b lim: 25 exec/s: 47 rss: 70Mb L: 6/21 MS: 1 EraseBytes- 00:08:10.279 [2024-11-17 23:08:06.811885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.279 [2024-11-17 23:08:06.811912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.279 [2024-11-17 23:08:06.811949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.279 [2024-11-17 23:08:06.811966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.279 [2024-11-17 23:08:06.812022] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:10.279 [2024-11-17 23:08:06.812038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.279 #48 NEW cov: 11825 ft: 14972 corp: 32/402b lim: 25 exec/s: 48 rss: 70Mb L: 18/21 MS: 1 PersAutoDict- DE: "\377\377\377\377\377\377\377\377"- 00:08:10.279 [2024-11-17 23:08:06.851903] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.279 [2024-11-17 23:08:06.851929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.279 [2024-11-17 23:08:06.851983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.279 [2024-11-17 23:08:06.851999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.279 #49 NEW cov: 11825 ft: 14985 corp: 33/416b lim: 25 exec/s: 49 rss: 70Mb L: 14/21 MS: 1 InsertByte- 00:08:10.279 [2024-11-17 23:08:06.881942] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.279 [2024-11-17 23:08:06.881969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.279 [2024-11-17 23:08:06.882025] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.279 [2024-11-17 23:08:06.882041] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.538 #50 NEW cov: 11825 ft: 15012 corp: 34/427b lim: 25 exec/s: 50 rss: 70Mb L: 11/21 MS: 1 ChangeBit- 00:08:10.538 [2024-11-17 23:08:06.922307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.538 [2024-11-17 23:08:06.922335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.538 [2024-11-17 23:08:06.922374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.538 [2024-11-17 23:08:06.922391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.538 [2024-11-17 23:08:06.922448] 
nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:10.539 [2024-11-17 23:08:06.922463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.539 [2024-11-17 23:08:06.922519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:08:10.539 [2024-11-17 23:08:06.922538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:10.539 #51 NEW cov: 11825 ft: 15061 corp: 35/448b lim: 25 exec/s: 51 rss: 70Mb L: 21/21 MS: 1 ShuffleBytes- 00:08:10.539 [2024-11-17 23:08:06.962172] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.539 [2024-11-17 23:08:06.962198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.539 [2024-11-17 23:08:06.962256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.539 [2024-11-17 23:08:06.962271] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.539 #52 NEW cov: 11825 ft: 15075 corp: 36/459b lim: 25 exec/s: 52 rss: 70Mb L: 11/21 MS: 1 ChangeByte- 00:08:10.539 [2024-11-17 23:08:07.002236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.539 [2024-11-17 23:08:07.002263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.539 #53 NEW cov: 11825 ft: 15096 corp: 37/466b lim: 25 exec/s: 53 rss: 70Mb L: 7/21 MS: 1 CopyPart- 00:08:10.539 [2024-11-17 23:08:07.042475] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.539 [2024-11-17 23:08:07.042501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.539 [2024-11-17 23:08:07.042559] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.539 [2024-11-17 23:08:07.042574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.539 #54 NEW cov: 11825 ft: 15099 corp: 38/480b lim: 25 exec/s: 54 rss: 70Mb L: 14/21 MS: 1 ChangeByte- 00:08:10.539 [2024-11-17 23:08:07.072513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.539 [2024-11-17 23:08:07.072544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.539 [2024-11-17 23:08:07.072586] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.539 [2024-11-17 23:08:07.072602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.539 #55 NEW cov: 11825 ft: 15103 corp: 39/494b lim: 25 exec/s: 55 rss: 70Mb L: 14/21 MS: 1 ChangeBinInt- 00:08:10.539 [2024-11-17 23:08:07.102579] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.539 
[2024-11-17 23:08:07.102606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.539 [2024-11-17 23:08:07.102651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.539 [2024-11-17 23:08:07.102667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.539 #56 NEW cov: 11825 ft: 15115 corp: 40/504b lim: 25 exec/s: 56 rss: 70Mb L: 10/21 MS: 1 ShuffleBytes- 00:08:10.539 [2024-11-17 23:08:07.142728] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.539 [2024-11-17 23:08:07.142755] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.539 [2024-11-17 23:08:07.142810] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.539 [2024-11-17 23:08:07.142826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.798 #57 NEW cov: 11825 ft: 15118 corp: 41/514b lim: 25 exec/s: 57 rss: 70Mb L: 10/21 MS: 1 ChangeBit- 00:08:10.798 [2024-11-17 23:08:07.172789] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.798 [2024-11-17 23:08:07.172815] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.798 [2024-11-17 23:08:07.172885] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.798 [2024-11-17 23:08:07.172901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.798 #58 NEW cov: 11825 ft: 15161 corp: 42/525b lim: 25 exec/s: 58 rss: 70Mb L: 11/21 MS: 1 ChangeBit- 00:08:10.798 [2024-11-17 23:08:07.212996] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.798 [2024-11-17 23:08:07.213022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.798 [2024-11-17 23:08:07.213066] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.798 [2024-11-17 23:08:07.213082] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.798 #59 NEW cov: 11825 ft: 15172 corp: 43/537b lim: 25 exec/s: 59 rss: 70Mb L: 12/21 MS: 1 InsertByte- 00:08:10.798 [2024-11-17 23:08:07.252931] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.798 [2024-11-17 23:08:07.252957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.798 #60 NEW cov: 11825 ft: 15179 corp: 44/545b lim: 25 exec/s: 60 rss: 70Mb L: 8/21 MS: 1 InsertByte- 00:08:10.798 [2024-11-17 23:08:07.293162] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.798 [2024-11-17 23:08:07.293188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.798 [2024-11-17 23:08:07.293244] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.798 [2024-11-17 23:08:07.293260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.798 #61 NEW cov: 11825 ft: 15194 corp: 45/559b lim: 25 exec/s: 61 rss: 70Mb L: 14/21 MS: 1 ChangeByte- 00:08:10.798 [2024-11-17 23:08:07.333375] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:08:10.798 [2024-11-17 23:08:07.333401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:10.798 [2024-11-17 23:08:07.333528] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:08:10.798 [2024-11-17 23:08:07.333543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:10.798 [2024-11-17 23:08:07.333560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:08:10.798 [2024-11-17 23:08:07.333574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:10.798 #62 NEW cov: 11825 ft: 15214 corp: 46/578b lim: 25 exec/s: 31 rss: 70Mb L: 19/21 MS: 1 CopyPart- 00:08:10.798 #62 DONE cov: 11825 ft: 15214 corp: 46/578b lim: 25 exec/s: 31 rss: 70Mb 00:08:10.798 ###### Recommended dictionary. ###### 00:08:10.798 "\377\377\377\377\377\377\377\377" # Uses: 1 00:08:10.798 "\000\000\000\000\377\377\377\377" # Uses: 0 00:08:10.798 ###### End of recommended dictionary. 
###### 00:08:10.799 Done 62 runs in 2 second(s) 00:08:11.058 23:08:07 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_23.conf 00:08:11.058 23:08:07 -- ../common.sh@72 -- # (( i++ )) 00:08:11.058 23:08:07 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:11.058 23:08:07 -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:08:11.058 23:08:07 -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:08:11.058 23:08:07 -- nvmf/run.sh@24 -- # local timen=1 00:08:11.058 23:08:07 -- nvmf/run.sh@25 -- # local core=0x1 00:08:11.058 23:08:07 -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:11.058 23:08:07 -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:08:11.058 23:08:07 -- nvmf/run.sh@29 -- # printf %02d 24 00:08:11.058 23:08:07 -- nvmf/run.sh@29 -- # port=4424 00:08:11.058 23:08:07 -- nvmf/run.sh@30 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:11.058 23:08:07 -- nvmf/run.sh@32 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:08:11.058 23:08:07 -- nvmf/run.sh@33 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:08:11.058 23:08:07 -- nvmf/run.sh@36 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 -r /var/tmp/spdk24.sock 00:08:11.058 [2024-11-17 23:08:07.509210] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:11.058 [2024-11-17 23:08:07.509278] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307533 ] 00:08:11.058 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.317 [2024-11-17 23:08:07.682499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.317 [2024-11-17 23:08:07.745112] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:11.317 [2024-11-17 23:08:07.745253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.317 [2024-11-17 23:08:07.803086] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.317 [2024-11-17 23:08:07.819417] tcp.c: 954:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:08:11.317 INFO: Running with entropic power schedule (0xFF, 100). 00:08:11.317 INFO: Seed: 4203625361 00:08:11.317 INFO: Loaded 1 modules (344649 inline 8-bit counters): 344649 [0x2819f8c, 0x286e1d5), 00:08:11.317 INFO: Loaded 1 PC tables (344649 PCs): 344649 [0x286e1d8,0x2db0668), 00:08:11.317 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:08:11.317 INFO: A corpus is not provided, starting from an empty corpus 00:08:11.317 #2 INITED exec/s: 0 rss: 60Mb 00:08:11.317 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:11.317 This may also happen if the target rejected all inputs we tried so far 00:08:11.317 [2024-11-17 23:08:07.874732] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.317 [2024-11-17 23:08:07.874762] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.317 [2024-11-17 23:08:07.874820] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.317 [2024-11-17 23:08:07.874840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.577 NEW_FUNC[1/672]: 0x466208 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:08:11.577 NEW_FUNC[2/672]: 0x476e88 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:08:11.577 #10 NEW cov: 11670 ft: 11671 corp: 2/54b lim: 100 exec/s: 0 rss: 68Mb L: 53/53 MS: 3 CrossOver-CrossOver-InsertRepeatedBytes- 00:08:11.577 [2024-11-17 23:08:08.185499] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.577 [2024-11-17 23:08:08.185560] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.577 [2024-11-17 23:08:08.185637] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.577 [2024-11-17 23:08:08.185661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.836 #21 NEW cov: 11783 ft: 12288 corp: 3/108b lim: 100 exec/s: 0 rss: 69Mb L: 54/54 MS: 1 InsertByte- 00:08:11.836 [2024-11-17 23:08:08.235483] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.836 [2024-11-17 23:08:08.235511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.836 [2024-11-17 23:08:08.235574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.836 [2024-11-17 23:08:08.235590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.836 #22 NEW cov: 11789 ft: 12532 corp: 4/161b lim: 100 exec/s: 0 rss: 69Mb L: 53/54 MS: 1 ChangeByte- 00:08:11.836 [2024-11-17 23:08:08.275402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:655360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.836 [2024-11-17 23:08:08.275429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.836 #27 NEW cov: 11874 ft: 13667 corp: 5/183b lim: 100 exec/s: 0 rss: 69Mb L: 22/54 MS: 5 CrossOver-ChangeByte-EraseBytes-CopyPart-CrossOver- 00:08:11.836 [2024-11-17 23:08:08.315547] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 
lba:655360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.836 [2024-11-17 23:08:08.315574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.836 #28 NEW cov: 11874 ft: 13823 corp: 6/211b lim: 100 exec/s: 0 rss: 69Mb L: 28/54 MS: 1 InsertRepeatedBytes- 00:08:11.836 [2024-11-17 23:08:08.356432] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.836 [2024-11-17 23:08:08.356459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.836 [2024-11-17 23:08:08.356513] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.836 [2024-11-17 23:08:08.356529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.836 [2024-11-17 23:08:08.356590] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.836 [2024-11-17 23:08:08.356604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:11.836 [2024-11-17 23:08:08.356658] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.836 [2024-11-17 23:08:08.356677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:11.836 #29 NEW cov: 11883 ft: 14400 corp: 7/302b lim: 100 exec/s: 0 rss: 69Mb L: 91/91 MS: 1 InsertRepeatedBytes- 00:08:11.836 [2024-11-17 23:08:08.395992] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.836 [2024-11-17 23:08:08.396019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.836 [2024-11-17 23:08:08.396057] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.837 [2024-11-17 23:08:08.396071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.837 #30 NEW cov: 11883 ft: 14524 corp: 8/355b lim: 100 exec/s: 0 rss: 69Mb L: 53/91 MS: 1 ShuffleBytes- 00:08:11.837 [2024-11-17 23:08:08.436284] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.837 [2024-11-17 23:08:08.436310] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:11.837 [2024-11-17 23:08:08.436347] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.837 [2024-11-17 23:08:08.436362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:11.837 [2024-11-17 23:08:08.436434] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 
nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:11.837 [2024-11-17 23:08:08.436449] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.096 #31 NEW cov: 11883 ft: 14832 corp: 9/416b lim: 100 exec/s: 0 rss: 69Mb L: 61/91 MS: 1 CopyPart- 00:08:12.096 [2024-11-17 23:08:08.476075] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:3104440330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.096 [2024-11-17 23:08:08.476101] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.096 #35 NEW cov: 11883 ft: 14905 corp: 10/447b lim: 100 exec/s: 0 rss: 69Mb L: 31/91 MS: 4 CopyPart-InsertByte-ChangeByte-CrossOver- 00:08:12.096 [2024-11-17 23:08:08.516209] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:655360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.096 [2024-11-17 23:08:08.516235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.096 #36 NEW cov: 11883 ft: 14954 corp: 11/476b lim: 100 exec/s: 0 rss: 69Mb L: 29/91 MS: 1 InsertByte- 00:08:12.096 [2024-11-17 23:08:08.556458] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.096 [2024-11-17 23:08:08.556484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.096 [2024-11-17 23:08:08.556539] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.096 [2024-11-17 23:08:08.556555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.096 #37 NEW cov: 11883 ft: 14965 corp: 12/534b lim: 100 exec/s: 0 rss: 69Mb L: 58/91 MS: 1 InsertRepeatedBytes- 00:08:12.096 [2024-11-17 23:08:08.596605] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.096 [2024-11-17 23:08:08.596634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.096 [2024-11-17 23:08:08.596689] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.096 [2024-11-17 23:08:08.596705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.096 #38 NEW cov: 11883 ft: 14989 corp: 13/582b lim: 100 exec/s: 0 rss: 69Mb L: 48/91 MS: 1 EraseBytes- 00:08:12.096 [2024-11-17 23:08:08.636549] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.096 [2024-11-17 23:08:08.636575] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.096 #39 NEW cov: 11883 ft: 15081 corp: 14/613b lim: 100 exec/s: 0 rss: 69Mb L: 31/91 MS: 1 EraseBytes- 00:08:12.096 [2024-11-17 23:08:08.676817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: 
COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.096 [2024-11-17 23:08:08.676843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.096 [2024-11-17 23:08:08.676879] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.096 [2024-11-17 23:08:08.676894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.096 #40 NEW cov: 11883 ft: 15115 corp: 15/666b lim: 100 exec/s: 0 rss: 69Mb L: 53/91 MS: 1 ChangeByte- 00:08:12.356 [2024-11-17 23:08:08.716941] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.356 [2024-11-17 23:08:08.716968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.356 [2024-11-17 23:08:08.717008] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.356 [2024-11-17 23:08:08.717023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.356 #41 NEW cov: 11883 ft: 15142 corp: 16/719b lim: 100 exec/s: 0 rss: 69Mb L: 53/91 MS: 1 ChangeBit- 00:08:12.356 [2024-11-17 23:08:08.757186] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.356 [2024-11-17 23:08:08.757212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.356 [2024-11-17 23:08:08.757250] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.356 [2024-11-17 23:08:08.757265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.356 [2024-11-17 23:08:08.757322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:12033 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.356 [2024-11-17 23:08:08.757337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.356 NEW_FUNC[1/1]: 0x194e708 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:12.356 #42 NEW cov: 11906 ft: 15229 corp: 17/780b lim: 100 exec/s: 0 rss: 70Mb L: 61/91 MS: 1 ChangeByte- 00:08:12.356 [2024-11-17 23:08:08.797145] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:168820736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.356 [2024-11-17 23:08:08.797172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.356 [2024-11-17 23:08:08.797228] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.356 [2024-11-17 23:08:08.797243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:08:12.356 #43 NEW cov: 11906 ft: 15290 corp: 18/833b lim: 100 exec/s: 0 rss: 70Mb L: 53/91 MS: 1 ChangeBit- 00:08:12.356 [2024-11-17 23:08:08.837575] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.356 [2024-11-17 23:08:08.837602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.356 [2024-11-17 23:08:08.837653] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:7599824371187712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.356 [2024-11-17 23:08:08.837669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.356 [2024-11-17 23:08:08.837722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.356 [2024-11-17 23:08:08.837738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.356 [2024-11-17 23:08:08.837790] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.356 [2024-11-17 23:08:08.837805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.356 #44 NEW cov: 11906 ft: 15331 corp: 19/925b lim: 100 exec/s: 44 rss: 70Mb L: 92/92 MS: 1 InsertByte- 00:08:12.356 [2024-11-17 23:08:08.877390] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.356 [2024-11-17 23:08:08.877415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.356 [2024-11-17 23:08:08.877452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:2828 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.356 [2024-11-17 23:08:08.877465] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.356 #45 NEW cov: 11906 ft: 15367 corp: 20/968b lim: 100 exec/s: 45 rss: 70Mb L: 43/92 MS: 1 CrossOver- 00:08:12.356 [2024-11-17 23:08:08.917481] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.356 [2024-11-17 23:08:08.917508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.356 [2024-11-17 23:08:08.917549] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.356 [2024-11-17 23:08:08.917564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.356 #46 NEW cov: 11906 ft: 15380 corp: 21/1021b lim: 100 exec/s: 46 rss: 70Mb L: 53/92 MS: 1 ShuffleBytes- 00:08:12.356 [2024-11-17 23:08:08.947773] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.356 [2024-11-17 
23:08:08.947800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.357 [2024-11-17 23:08:08.947840] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.357 [2024-11-17 23:08:08.947855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.357 [2024-11-17 23:08:08.947915] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.357 [2024-11-17 23:08:08.947934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.357 #47 NEW cov: 11906 ft: 15386 corp: 22/1084b lim: 100 exec/s: 47 rss: 70Mb L: 63/92 MS: 1 CrossOver- 00:08:12.616 [2024-11-17 23:08:08.987868] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.616 [2024-11-17 23:08:08.987895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.616 [2024-11-17 23:08:08.987931] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.616 [2024-11-17 23:08:08.987947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.616 [2024-11-17 23:08:08.988016] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:4123389606957234489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.616 [2024-11-17 23:08:08.988031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.616 #48 NEW cov: 11906 ft: 15416 corp: 23/1148b lim: 100 exec/s: 48 rss: 70Mb L: 64/92 MS: 1 InsertRepeatedBytes- 00:08:12.616 [2024-11-17 23:08:09.028138] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.616 [2024-11-17 23:08:09.028165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.616 [2024-11-17 23:08:09.028212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:7599824371187712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.616 [2024-11-17 23:08:09.028230] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.616 [2024-11-17 23:08:09.028282] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.616 [2024-11-17 23:08:09.028297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.616 [2024-11-17 23:08:09.028352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.616 [2024-11-17 23:08:09.028366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE 
OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.616 #49 NEW cov: 11906 ft: 15433 corp: 24/1240b lim: 100 exec/s: 49 rss: 70Mb L: 92/92 MS: 1 ChangeByte- 00:08:12.617 [2024-11-17 23:08:09.067828] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:655360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.617 [2024-11-17 23:08:09.067855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.617 #50 NEW cov: 11906 ft: 15465 corp: 25/1268b lim: 100 exec/s: 50 rss: 70Mb L: 28/92 MS: 1 CopyPart- 00:08:12.617 [2024-11-17 23:08:09.108379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.617 [2024-11-17 23:08:09.108406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.617 [2024-11-17 23:08:09.108469] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:7599824371187712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.617 [2024-11-17 23:08:09.108486] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.617 [2024-11-17 23:08:09.108585] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.617 [2024-11-17 23:08:09.108606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.617 [2024-11-17 23:08:09.108659] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.617 [2024-11-17 23:08:09.108674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:12.617 #51 NEW cov: 11906 ft: 15480 corp: 26/1360b lim: 100 exec/s: 51 rss: 70Mb L: 92/92 MS: 1 ChangeBit- 00:08:12.617 [2024-11-17 23:08:09.148212] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.617 [2024-11-17 23:08:09.148238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.617 [2024-11-17 23:08:09.148309] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.617 [2024-11-17 23:08:09.148325] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.617 #52 NEW cov: 11906 ft: 15500 corp: 27/1416b lim: 100 exec/s: 52 rss: 70Mb L: 56/92 MS: 1 EraseBytes- 00:08:12.617 [2024-11-17 23:08:09.188333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.617 [2024-11-17 23:08:09.188359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.617 [2024-11-17 23:08:09.188408] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:08:12.617 [2024-11-17 23:08:09.188426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.617 #53 NEW cov: 11906 ft: 15507 corp: 28/1469b lim: 100 exec/s: 53 rss: 70Mb L: 53/92 MS: 1 ChangeBinInt- 00:08:12.617 [2024-11-17 23:08:09.228430] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.617 [2024-11-17 23:08:09.228457] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.617 [2024-11-17 23:08:09.228515] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.617 [2024-11-17 23:08:09.228536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.875 #54 NEW cov: 11906 ft: 15515 corp: 29/1527b lim: 100 exec/s: 54 rss: 70Mb L: 58/92 MS: 1 ChangeBinInt- 00:08:12.875 [2024-11-17 23:08:09.268594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.875 [2024-11-17 23:08:09.268620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.875 [2024-11-17 23:08:09.268667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.875 [2024-11-17 23:08:09.268687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.875 #55 NEW cov: 11906 ft: 15559 corp: 30/1581b lim: 100 exec/s: 55 rss: 70Mb L: 54/92 MS: 1 ChangeByte- 00:08:12.875 [2024-11-17 23:08:09.298636] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167782912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.875 [2024-11-17 23:08:09.298661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.876 [2024-11-17 23:08:09.298714] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.876 [2024-11-17 23:08:09.298729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.876 #56 NEW cov: 11906 ft: 15571 corp: 31/1635b lim: 100 exec/s: 56 rss: 70Mb L: 54/92 MS: 1 InsertByte- 00:08:12.876 [2024-11-17 23:08:09.338778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.876 [2024-11-17 23:08:09.338804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.876 [2024-11-17 23:08:09.338853] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:2049 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.876 [2024-11-17 23:08:09.338871] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.876 #57 NEW cov: 11906 ft: 15594 
corp: 32/1689b lim: 100 exec/s: 57 rss: 70Mb L: 54/92 MS: 1 InsertByte- 00:08:12.876 [2024-11-17 23:08:09.379010] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:720575941956403200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.876 [2024-11-17 23:08:09.379036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.876 [2024-11-17 23:08:09.379083] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.876 [2024-11-17 23:08:09.379103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.876 [2024-11-17 23:08:09.379157] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.876 [2024-11-17 23:08:09.379171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.876 #58 NEW cov: 11906 ft: 15599 corp: 33/1754b lim: 100 exec/s: 58 rss: 70Mb L: 65/92 MS: 1 CMP- DE: "^\001\000\000"- 00:08:12.876 [2024-11-17 23:08:09.418927] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:72057640092041310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.876 [2024-11-17 23:08:09.418952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.876 #59 NEW cov: 11906 ft: 15650 corp: 34/1789b lim: 100 exec/s: 59 rss: 70Mb L: 35/92 MS: 1 PersAutoDict- DE: "^\001\000\000"- 00:08:12.876 [2024-11-17 23:08:09.459289] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.876 [2024-11-17 23:08:09.459316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:12.876 [2024-11-17 23:08:09.459350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.876 [2024-11-17 23:08:09.459366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:12.876 [2024-11-17 23:08:09.459421] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:12.876 [2024-11-17 23:08:09.459436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:12.876 #60 NEW cov: 11906 ft: 15664 corp: 35/1858b lim: 100 exec/s: 60 rss: 70Mb L: 69/92 MS: 1 CopyPart- 00:08:13.135 [2024-11-17 23:08:09.499203] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.135 [2024-11-17 23:08:09.499234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.135 [2024-11-17 23:08:09.499281] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.135 [2024-11-17 23:08:09.499302] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.135 #61 NEW cov: 11906 ft: 15667 corp: 36/1917b lim: 100 exec/s: 61 rss: 70Mb L: 59/92 MS: 1 InsertByte- 00:08:13.135 [2024-11-17 23:08:09.539322] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.135 [2024-11-17 23:08:09.539348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.135 [2024-11-17 23:08:09.539404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.135 [2024-11-17 23:08:09.539419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.135 #62 NEW cov: 11906 ft: 15719 corp: 37/1970b lim: 100 exec/s: 62 rss: 70Mb L: 53/92 MS: 1 ShuffleBytes- 00:08:13.135 [2024-11-17 23:08:09.579641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.135 [2024-11-17 23:08:09.579668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.135 [2024-11-17 23:08:09.579711] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:2341178251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.135 [2024-11-17 23:08:09.579726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.135 [2024-11-17 23:08:09.579778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.135 [2024-11-17 23:08:09.579793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.135 #63 NEW cov: 11906 ft: 15731 corp: 38/2033b lim: 100 exec/s: 63 rss: 70Mb L: 63/92 MS: 1 InsertRepeatedBytes- 00:08:13.135 [2024-11-17 23:08:09.619746] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.135 [2024-11-17 23:08:09.619773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.135 [2024-11-17 23:08:09.619827] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.135 [2024-11-17 23:08:09.619845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.135 [2024-11-17 23:08:09.619896] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:12033 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.135 [2024-11-17 23:08:09.619911] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.135 #64 NEW cov: 11906 ft: 15740 corp: 39/2094b lim: 100 exec/s: 64 rss: 70Mb L: 61/92 MS: 1 CopyPart- 00:08:13.135 [2024-11-17 23:08:09.659713] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167782912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.135 [2024-11-17 23:08:09.659739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.135 [2024-11-17 23:08:09.659778] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.135 [2024-11-17 23:08:09.659793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.135 #65 NEW cov: 11906 ft: 15750 corp: 40/2148b lim: 100 exec/s: 65 rss: 70Mb L: 54/92 MS: 1 ChangeBit- 00:08:13.135 [2024-11-17 23:08:09.699959] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.135 [2024-11-17 23:08:09.699985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.135 [2024-11-17 23:08:09.700022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.135 [2024-11-17 23:08:09.700037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.135 [2024-11-17 23:08:09.700090] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.135 [2024-11-17 23:08:09.700105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.135 #66 NEW cov: 11906 ft: 15763 corp: 41/2220b lim: 100 exec/s: 66 rss: 70Mb L: 72/92 MS: 1 CopyPart- 00:08:13.135 [2024-11-17 23:08:09.739938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.135 [2024-11-17 23:08:09.739965] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.135 [2024-11-17 23:08:09.740018] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:403726925824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.135 [2024-11-17 23:08:09.740038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.395 #67 NEW cov: 11906 ft: 15768 corp: 42/2272b lim: 100 exec/s: 67 rss: 70Mb L: 52/92 MS: 1 PersAutoDict- DE: "^\001\000\000"- 00:08:13.395 [2024-11-17 23:08:09.779888] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:4849664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.395 [2024-11-17 23:08:09.779914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.395 #68 NEW cov: 11906 ft: 15775 corp: 43/2300b lim: 100 exec/s: 68 rss: 70Mb L: 28/92 MS: 1 ChangeBit- 00:08:13.395 [2024-11-17 23:08:09.820424] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.395 [2024-11-17 23:08:09.820450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.395 [2024-11-17 23:08:09.820514] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:7599824371187712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.395 [2024-11-17 23:08:09.820537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.395 [2024-11-17 23:08:09.820594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:1125899906842624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.395 [2024-11-17 23:08:09.820610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:08:13.395 [2024-11-17 23:08:09.820667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.395 [2024-11-17 23:08:09.820686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:08:13.395 #69 NEW cov: 11906 ft: 15788 corp: 44/2392b lim: 100 exec/s: 69 rss: 70Mb L: 92/92 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\004"- 00:08:13.395 [2024-11-17 23:08:09.860252] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:168820736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.395 [2024-11-17 23:08:09.860282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:08:13.395 [2024-11-17 23:08:09.860332] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:08:13.395 [2024-11-17 23:08:09.860352] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:08:13.395 #70 NEW cov: 11906 ft: 15812 corp: 45/2445b lim: 100 exec/s: 35 rss: 70Mb L: 53/92 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\004"- 00:08:13.395 #70 DONE cov: 11906 ft: 15812 corp: 45/2445b lim: 100 exec/s: 35 rss: 70Mb 00:08:13.395 ###### Recommended dictionary. ###### 00:08:13.395 "^\001\000\000" # Uses: 2 00:08:13.395 "\000\000\000\000\000\000\000\004" # Uses: 1 00:08:13.395 ###### End of recommended dictionary. 
###### 00:08:13.395 Done 70 runs in 2 second(s) 00:08:13.395 23:08:10 -- nvmf/run.sh@46 -- # rm -rf /tmp/fuzz_json_24.conf 00:08:13.655 23:08:10 -- ../common.sh@72 -- # (( i++ )) 00:08:13.655 23:08:10 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:13.655 23:08:10 -- nvmf/run.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:08:13.655 00:08:13.655 real 1m3.754s 00:08:13.655 user 1m40.505s 00:08:13.655 sys 0m6.939s 00:08:13.655 23:08:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.655 23:08:10 -- common/autotest_common.sh@10 -- # set +x 00:08:13.655 ************************************ 00:08:13.655 END TEST nvmf_fuzz 00:08:13.655 ************************************ 00:08:13.655 23:08:10 -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:08:13.655 23:08:10 -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:08:13.655 23:08:10 -- fuzz/llvm.sh@20 -- # run_test vfio_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:13.655 23:08:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:13.655 23:08:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.655 23:08:10 -- common/autotest_common.sh@10 -- # set +x 00:08:13.655 ************************************ 00:08:13.655 START TEST vfio_fuzz 00:08:13.655 ************************************ 00:08:13.655 23:08:10 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:08:13.655 * Looking for test storage... 00:08:13.655 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:13.655 23:08:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:13.655 23:08:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:13.655 23:08:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:13.655 23:08:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:13.655 23:08:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:13.655 23:08:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:13.655 23:08:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:13.655 23:08:10 -- scripts/common.sh@335 -- # IFS=.-: 00:08:13.655 23:08:10 -- scripts/common.sh@335 -- # read -ra ver1 00:08:13.655 23:08:10 -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.655 23:08:10 -- scripts/common.sh@336 -- # read -ra ver2 00:08:13.655 23:08:10 -- scripts/common.sh@337 -- # local 'op=<' 00:08:13.655 23:08:10 -- scripts/common.sh@339 -- # ver1_l=2 00:08:13.655 23:08:10 -- scripts/common.sh@340 -- # ver2_l=1 00:08:13.655 23:08:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:13.655 23:08:10 -- scripts/common.sh@343 -- # case "$op" in 00:08:13.655 23:08:10 -- scripts/common.sh@344 -- # : 1 00:08:13.655 23:08:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:13.655 23:08:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.655 23:08:10 -- scripts/common.sh@364 -- # decimal 1 00:08:13.655 23:08:10 -- scripts/common.sh@352 -- # local d=1 00:08:13.655 23:08:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.655 23:08:10 -- scripts/common.sh@354 -- # echo 1 00:08:13.655 23:08:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:13.655 23:08:10 -- scripts/common.sh@365 -- # decimal 2 00:08:13.655 23:08:10 -- scripts/common.sh@352 -- # local d=2 00:08:13.655 23:08:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.655 23:08:10 -- scripts/common.sh@354 -- # echo 2 00:08:13.656 23:08:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:13.656 23:08:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:13.656 23:08:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:13.656 23:08:10 -- scripts/common.sh@367 -- # return 0 00:08:13.656 23:08:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.656 23:08:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:13.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.656 --rc genhtml_branch_coverage=1 00:08:13.656 --rc genhtml_function_coverage=1 00:08:13.656 --rc genhtml_legend=1 00:08:13.656 --rc geninfo_all_blocks=1 00:08:13.656 --rc geninfo_unexecuted_blocks=1 00:08:13.656 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:13.656 ' 00:08:13.656 23:08:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:13.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.656 --rc genhtml_branch_coverage=1 00:08:13.656 --rc genhtml_function_coverage=1 00:08:13.656 --rc genhtml_legend=1 00:08:13.656 --rc geninfo_all_blocks=1 00:08:13.656 --rc geninfo_unexecuted_blocks=1 00:08:13.656 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:13.656 ' 00:08:13.656 23:08:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:13.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.656 --rc genhtml_branch_coverage=1 00:08:13.656 --rc genhtml_function_coverage=1 00:08:13.656 --rc genhtml_legend=1 00:08:13.656 --rc geninfo_all_blocks=1 00:08:13.656 --rc geninfo_unexecuted_blocks=1 00:08:13.656 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:13.656 ' 00:08:13.656 23:08:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:13.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.656 --rc genhtml_branch_coverage=1 00:08:13.656 --rc genhtml_function_coverage=1 00:08:13.656 --rc genhtml_legend=1 00:08:13.656 --rc geninfo_all_blocks=1 00:08:13.656 --rc geninfo_unexecuted_blocks=1 00:08:13.656 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:13.656 ' 00:08:13.656 23:08:10 -- vfio/run.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:08:13.656 23:08:10 -- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:08:13.656 23:08:10 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:13.656 23:08:10 -- common/autotest_common.sh@34 -- # set -e 00:08:13.656 23:08:10 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:13.656 23:08:10 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:13.656 23:08:10 -- common/autotest_common.sh@38 -- # [[ -e 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:13.656 23:08:10 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:08:13.656 23:08:10 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:13.656 23:08:10 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:13.656 23:08:10 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:13.656 23:08:10 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:13.656 23:08:10 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:13.656 23:08:10 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:13.656 23:08:10 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:13.656 23:08:10 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:13.656 23:08:10 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:13.656 23:08:10 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:13.656 23:08:10 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:13.656 23:08:10 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:13.656 23:08:10 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:13.656 23:08:10 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:13.656 23:08:10 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:13.656 23:08:10 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:13.656 23:08:10 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:13.656 23:08:10 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:13.656 23:08:10 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:13.656 23:08:10 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:13.656 23:08:10 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:13.656 23:08:10 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:13.656 23:08:10 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:13.656 23:08:10 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:13.656 23:08:10 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:13.656 23:08:10 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:13.656 23:08:10 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:13.656 23:08:10 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:13.656 23:08:10 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:13.656 23:08:10 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:13.656 23:08:10 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:13.656 23:08:10 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:13.656 23:08:10 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:13.656 23:08:10 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:13.656 23:08:10 -- common/build_config.sh@35 -- # CONFIG_FUZZER=y 00:08:13.656 23:08:10 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:13.656 23:08:10 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:13.656 23:08:10 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:13.656 23:08:10 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:13.656 23:08:10 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:13.656 23:08:10 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:13.656 23:08:10 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 
00:08:13.656 23:08:10 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:13.656 23:08:10 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:13.656 23:08:10 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:13.656 23:08:10 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:13.656 23:08:10 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:13.656 23:08:10 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:13.656 23:08:10 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:13.656 23:08:10 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:13.656 23:08:10 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:13.656 23:08:10 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:13.656 23:08:10 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:13.656 23:08:10 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:13.656 23:08:10 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:13.656 23:08:10 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:13.656 23:08:10 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:13.656 23:08:10 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:13.656 23:08:10 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:13.656 23:08:10 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:13.656 23:08:10 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:08:13.656 23:08:10 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:13.656 23:08:10 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:13.656 23:08:10 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:08:13.656 23:08:10 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:13.656 23:08:10 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:13.656 23:08:10 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:13.656 23:08:10 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:13.656 23:08:10 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:13.656 23:08:10 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:13.656 23:08:10 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:13.656 23:08:10 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:13.656 23:08:10 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:13.656 23:08:10 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:13.656 23:08:10 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:13.656 23:08:10 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:13.656 23:08:10 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:13.656 23:08:10 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:13.656 23:08:10 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:13.656 23:08:10 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:13.656 23:08:10 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:08:13.656 23:08:10 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:13.919 23:08:10 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:08:13.919 23:08:10 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:13.919 23:08:10 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 
00:08:13.919 23:08:10 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:08:13.919 23:08:10 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:13.919 23:08:10 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:13.919 23:08:10 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:13.919 23:08:10 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:13.919 23:08:10 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:13.919 23:08:10 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:13.919 23:08:10 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:13.919 23:08:10 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:08:13.919 23:08:10 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:13.919 #define SPDK_CONFIG_H 00:08:13.919 #define SPDK_CONFIG_APPS 1 00:08:13.919 #define SPDK_CONFIG_ARCH native 00:08:13.919 #undef SPDK_CONFIG_ASAN 00:08:13.919 #undef SPDK_CONFIG_AVAHI 00:08:13.919 #undef SPDK_CONFIG_CET 00:08:13.919 #define SPDK_CONFIG_COVERAGE 1 00:08:13.919 #define SPDK_CONFIG_CROSS_PREFIX 00:08:13.919 #undef SPDK_CONFIG_CRYPTO 00:08:13.919 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:13.919 #undef SPDK_CONFIG_CUSTOMOCF 00:08:13.919 #undef SPDK_CONFIG_DAOS 00:08:13.919 #define SPDK_CONFIG_DAOS_DIR 00:08:13.919 #define SPDK_CONFIG_DEBUG 1 00:08:13.919 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:13.919 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:08:13.919 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:13.919 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:13.919 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:13.919 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:08:13.919 #define SPDK_CONFIG_EXAMPLES 1 00:08:13.919 #undef SPDK_CONFIG_FC 00:08:13.919 #define SPDK_CONFIG_FC_PATH 00:08:13.919 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:13.919 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:13.919 #undef SPDK_CONFIG_FUSE 00:08:13.919 #define SPDK_CONFIG_FUZZER 1 00:08:13.919 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:08:13.919 #undef SPDK_CONFIG_GOLANG 00:08:13.919 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:13.919 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:13.919 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:13.919 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:13.919 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:13.919 #define SPDK_CONFIG_IDXD 1 00:08:13.919 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:13.919 #undef SPDK_CONFIG_IPSEC_MB 00:08:13.919 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:13.919 #define SPDK_CONFIG_ISAL 1 00:08:13.919 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:13.919 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:13.919 #define SPDK_CONFIG_LIBDIR 00:08:13.919 #undef SPDK_CONFIG_LTO 00:08:13.919 #define SPDK_CONFIG_MAX_LCORES 00:08:13.919 #define SPDK_CONFIG_NVME_CUSE 1 00:08:13.919 #undef SPDK_CONFIG_OCF 00:08:13.919 #define SPDK_CONFIG_OCF_PATH 00:08:13.919 #define SPDK_CONFIG_OPENSSL_PATH 00:08:13.919 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:13.919 #undef SPDK_CONFIG_PGO_USE 00:08:13.919 #define SPDK_CONFIG_PREFIX /usr/local 00:08:13.919 #undef SPDK_CONFIG_RAID5F 00:08:13.919 #undef SPDK_CONFIG_RBD 00:08:13.919 #define 
SPDK_CONFIG_RDMA 1 00:08:13.919 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:13.919 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:13.919 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:13.919 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:13.919 #undef SPDK_CONFIG_SHARED 00:08:13.919 #undef SPDK_CONFIG_SMA 00:08:13.919 #define SPDK_CONFIG_TESTS 1 00:08:13.919 #undef SPDK_CONFIG_TSAN 00:08:13.919 #define SPDK_CONFIG_UBLK 1 00:08:13.919 #define SPDK_CONFIG_UBSAN 1 00:08:13.919 #undef SPDK_CONFIG_UNIT_TESTS 00:08:13.919 #undef SPDK_CONFIG_URING 00:08:13.919 #define SPDK_CONFIG_URING_PATH 00:08:13.919 #undef SPDK_CONFIG_URING_ZNS 00:08:13.919 #undef SPDK_CONFIG_USDT 00:08:13.919 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:13.919 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:13.919 #define SPDK_CONFIG_VFIO_USER 1 00:08:13.919 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:13.919 #define SPDK_CONFIG_VHOST 1 00:08:13.919 #define SPDK_CONFIG_VIRTIO 1 00:08:13.919 #undef SPDK_CONFIG_VTUNE 00:08:13.919 #define SPDK_CONFIG_VTUNE_DIR 00:08:13.919 #define SPDK_CONFIG_WERROR 1 00:08:13.919 #define SPDK_CONFIG_WPDK_DIR 00:08:13.919 #undef SPDK_CONFIG_XNVME 00:08:13.919 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:13.919 23:08:10 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:13.919 23:08:10 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:13.919 23:08:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.919 23:08:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.919 23:08:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.919 23:08:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.919 23:08:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.919 23:08:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.919 23:08:10 -- paths/export.sh@5 -- # export PATH 00:08:13.919 23:08:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.919 23:08:10 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:13.919 23:08:10 -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:08:13.919 23:08:10 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:13.919 23:08:10 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:08:13.919 23:08:10 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:13.920 23:08:10 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:08:13.920 23:08:10 -- pm/common@16 -- # TEST_TAG=N/A 00:08:13.920 23:08:10 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:08:13.920 23:08:10 -- common/autotest_common.sh@52 -- # : 1 00:08:13.920 23:08:10 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:13.920 23:08:10 -- common/autotest_common.sh@56 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:13.920 23:08:10 -- common/autotest_common.sh@58 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:13.920 23:08:10 -- common/autotest_common.sh@60 -- # : 1 00:08:13.920 23:08:10 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:13.920 23:08:10 -- common/autotest_common.sh@62 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:13.920 23:08:10 -- common/autotest_common.sh@64 -- # : 00:08:13.920 23:08:10 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:13.920 23:08:10 -- common/autotest_common.sh@66 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:13.920 23:08:10 -- common/autotest_common.sh@68 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:13.920 23:08:10 -- common/autotest_common.sh@70 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:13.920 23:08:10 -- common/autotest_common.sh@72 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:13.920 23:08:10 -- common/autotest_common.sh@74 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:13.920 23:08:10 -- common/autotest_common.sh@76 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:13.920 23:08:10 -- common/autotest_common.sh@78 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:13.920 23:08:10 -- common/autotest_common.sh@80 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:13.920 23:08:10 -- common/autotest_common.sh@82 -- # : 0 00:08:13.920 23:08:10 
-- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:13.920 23:08:10 -- common/autotest_common.sh@84 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:13.920 23:08:10 -- common/autotest_common.sh@86 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:13.920 23:08:10 -- common/autotest_common.sh@88 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:13.920 23:08:10 -- common/autotest_common.sh@90 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:13.920 23:08:10 -- common/autotest_common.sh@92 -- # : 1 00:08:13.920 23:08:10 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:13.920 23:08:10 -- common/autotest_common.sh@94 -- # : 1 00:08:13.920 23:08:10 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:13.920 23:08:10 -- common/autotest_common.sh@96 -- # : rdma 00:08:13.920 23:08:10 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:13.920 23:08:10 -- common/autotest_common.sh@98 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:13.920 23:08:10 -- common/autotest_common.sh@100 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:13.920 23:08:10 -- common/autotest_common.sh@102 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:13.920 23:08:10 -- common/autotest_common.sh@104 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:13.920 23:08:10 -- common/autotest_common.sh@106 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:13.920 23:08:10 -- common/autotest_common.sh@108 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:13.920 23:08:10 -- common/autotest_common.sh@110 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:13.920 23:08:10 -- common/autotest_common.sh@112 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:13.920 23:08:10 -- common/autotest_common.sh@114 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:13.920 23:08:10 -- common/autotest_common.sh@116 -- # : 1 00:08:13.920 23:08:10 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:13.920 23:08:10 -- common/autotest_common.sh@118 -- # : 00:08:13.920 23:08:10 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:13.920 23:08:10 -- common/autotest_common.sh@120 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:13.920 23:08:10 -- common/autotest_common.sh@122 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:13.920 23:08:10 -- common/autotest_common.sh@124 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:13.920 23:08:10 -- common/autotest_common.sh@126 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:13.920 23:08:10 -- common/autotest_common.sh@128 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:13.920 23:08:10 -- common/autotest_common.sh@130 -- # : 0 00:08:13.920 
23:08:10 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:13.920 23:08:10 -- common/autotest_common.sh@132 -- # : 00:08:13.920 23:08:10 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:13.920 23:08:10 -- common/autotest_common.sh@134 -- # : true 00:08:13.920 23:08:10 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:13.920 23:08:10 -- common/autotest_common.sh@136 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:13.920 23:08:10 -- common/autotest_common.sh@138 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:13.920 23:08:10 -- common/autotest_common.sh@140 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:13.920 23:08:10 -- common/autotest_common.sh@142 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:13.920 23:08:10 -- common/autotest_common.sh@144 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:13.920 23:08:10 -- common/autotest_common.sh@146 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:13.920 23:08:10 -- common/autotest_common.sh@148 -- # : 00:08:13.920 23:08:10 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:13.920 23:08:10 -- common/autotest_common.sh@150 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:13.920 23:08:10 -- common/autotest_common.sh@152 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:13.920 23:08:10 -- common/autotest_common.sh@154 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:13.920 23:08:10 -- common/autotest_common.sh@156 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:13.920 23:08:10 -- common/autotest_common.sh@158 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:13.920 23:08:10 -- common/autotest_common.sh@160 -- # : 0 00:08:13.920 23:08:10 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:13.920 23:08:10 -- common/autotest_common.sh@163 -- # : 00:08:13.920 23:08:10 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:13.920 23:08:10 -- common/autotest_common.sh@165 -- # : 0 00:08:13.921 23:08:10 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:13.921 23:08:10 -- common/autotest_common.sh@167 -- # : 0 00:08:13.921 23:08:10 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:13.921 23:08:10 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:13.921 23:08:10 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:08:13.921 23:08:10 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:13.921 23:08:10 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:08:13.921 23:08:10 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:13.921 23:08:10 -- common/autotest_common.sh@173 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:13.921 23:08:10 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:13.921 23:08:10 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:13.921 23:08:10 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:13.921 23:08:10 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:13.921 23:08:10 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:13.921 23:08:10 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:08:13.921 23:08:10 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:13.921 23:08:10 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:13.921 23:08:10 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:13.921 23:08:10 
-- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:13.921 23:08:10 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:13.921 23:08:10 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:13.921 23:08:10 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:13.921 23:08:10 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:13.921 23:08:10 -- common/autotest_common.sh@196 -- # cat 00:08:13.921 23:08:10 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:13.921 23:08:10 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:13.921 23:08:10 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:13.921 23:08:10 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:13.921 23:08:10 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:13.921 23:08:10 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:13.921 23:08:10 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:13.921 23:08:10 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:13.921 23:08:10 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:08:13.921 23:08:10 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:13.921 23:08:10 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:08:13.921 23:08:10 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:13.921 23:08:10 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:13.921 23:08:10 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:13.921 23:08:10 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:13.921 23:08:10 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:13.921 23:08:10 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:13.921 23:08:10 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:13.921 23:08:10 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:13.921 23:08:10 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:08:13.921 23:08:10 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:08:13.921 23:08:10 -- common/autotest_common.sh@249 -- # _LCOV= 00:08:13.921 23:08:10 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:08:13.921 23:08:10 -- common/autotest_common.sh@250 -- # [[ 1 -eq 1 ]] 00:08:13.921 23:08:10 -- common/autotest_common.sh@250 -- # _LCOV=1 00:08:13.921 23:08:10 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:13.921 23:08:10 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:08:13.921 23:08:10 -- common/autotest_common.sh@255 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:13.921 23:08:10 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:08:13.921 23:08:10 -- common/autotest_common.sh@259 -- # export valgrind= 00:08:13.921 23:08:10 -- common/autotest_common.sh@259 -- # valgrind= 00:08:13.921 23:08:10 -- common/autotest_common.sh@265 -- # uname -s 00:08:13.921 23:08:10 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:08:13.921 23:08:10 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:08:13.921 23:08:10 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:08:13.921 23:08:10 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:08:13.921 23:08:10 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:13.921 23:08:10 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:13.921 23:08:10 -- common/autotest_common.sh@275 -- # MAKE=make 00:08:13.921 23:08:10 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j112 00:08:13.921 23:08:10 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:08:13.921 23:08:10 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:08:13.921 23:08:10 -- common/autotest_common.sh@294 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:08:13.921 23:08:10 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:08:13.921 23:08:10 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:08:13.921 23:08:10 -- common/autotest_common.sh@319 -- # [[ -z 1308099 ]] 00:08:13.921 23:08:10 -- common/autotest_common.sh@319 -- # kill -0 1308099 00:08:13.921 23:08:10 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:08:13.921 23:08:10 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:08:13.921 23:08:10 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:08:13.921 23:08:10 -- common/autotest_common.sh@332 -- # local mount target_dir 00:08:13.921 23:08:10 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:08:13.921 23:08:10 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:08:13.921 23:08:10 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:08:13.921 23:08:10 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:08:13.921 23:08:10 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.mqnrnE 00:08:13.921 23:08:10 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:13.921 23:08:10 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:08:13.921 23:08:10 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:08:13.921 23:08:10 -- common/autotest_common.sh@356 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.mqnrnE/tests/vfio /tmp/spdk.mqnrnE 00:08:13.921 23:08:10 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:08:13.921 23:08:10 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:13.921 23:08:10 -- common/autotest_common.sh@328 -- # df -T 00:08:13.921 23:08:10 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:08:13.921 23:08:10 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_devtmpfs 00:08:13.921 23:08:10 -- 
common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:08:13.922 23:08:10 -- common/autotest_common.sh@363 -- # avails["$mount"]=67108864 00:08:13.922 23:08:10 -- common/autotest_common.sh@363 -- # sizes["$mount"]=67108864 00:08:13.922 23:08:10 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:08:13.922 23:08:10 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:13.922 23:08:10 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/pmem0 00:08:13.922 23:08:10 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext2 00:08:13.922 23:08:10 -- common/autotest_common.sh@363 -- # avails["$mount"]=4096 00:08:13.922 23:08:10 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5284429824 00:08:13.922 23:08:10 -- common/autotest_common.sh@364 -- # uses["$mount"]=5284425728 00:08:13.922 23:08:10 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:13.922 23:08:10 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_root 00:08:13.922 23:08:10 -- common/autotest_common.sh@362 -- # fss["$mount"]=overlay 00:08:13.922 23:08:10 -- common/autotest_common.sh@363 -- # avails["$mount"]=54447955968 00:08:13.922 23:08:10 -- common/autotest_common.sh@363 -- # sizes["$mount"]=61730574336 00:08:13.922 23:08:10 -- common/autotest_common.sh@364 -- # uses["$mount"]=7282618368 00:08:13.922 23:08:10 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:13.922 23:08:10 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:13.922 23:08:10 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:13.922 23:08:10 -- common/autotest_common.sh@363 -- # avails["$mount"]=30864027648 00:08:13.922 23:08:10 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865285120 00:08:13.922 23:08:10 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:08:13.922 23:08:10 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:13.922 23:08:10 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:13.922 23:08:10 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:13.922 23:08:10 -- common/autotest_common.sh@363 -- # avails["$mount"]=12340117504 00:08:13.922 23:08:10 -- common/autotest_common.sh@363 -- # sizes["$mount"]=12346118144 00:08:13.922 23:08:10 -- common/autotest_common.sh@364 -- # uses["$mount"]=6000640 00:08:13.922 23:08:10 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:13.922 23:08:10 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:13.922 23:08:10 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:13.922 23:08:10 -- common/autotest_common.sh@363 -- # avails["$mount"]=30865088512 00:08:13.922 23:08:10 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865289216 00:08:13.922 23:08:10 -- common/autotest_common.sh@364 -- # uses["$mount"]=200704 00:08:13.922 23:08:10 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:13.922 23:08:10 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:13.922 23:08:10 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:13.922 23:08:10 -- common/autotest_common.sh@363 -- # avails["$mount"]=6173044736 00:08:13.922 23:08:10 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6173057024 00:08:13.922 23:08:10 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:08:13.922 23:08:10 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 
00:08:13.922 23:08:10 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:08:13.922 * Looking for test storage... 00:08:13.922 23:08:10 -- common/autotest_common.sh@369 -- # local target_space new_size 00:08:13.922 23:08:10 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:08:13.922 23:08:10 -- common/autotest_common.sh@373 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:13.922 23:08:10 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:13.922 23:08:10 -- common/autotest_common.sh@373 -- # mount=/ 00:08:13.922 23:08:10 -- common/autotest_common.sh@375 -- # target_space=54447955968 00:08:13.922 23:08:10 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:08:13.922 23:08:10 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:08:13.922 23:08:10 -- common/autotest_common.sh@381 -- # [[ overlay == tmpfs ]] 00:08:13.922 23:08:10 -- common/autotest_common.sh@381 -- # [[ overlay == ramfs ]] 00:08:13.922 23:08:10 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:08:13.922 23:08:10 -- common/autotest_common.sh@382 -- # new_size=9497210880 00:08:13.922 23:08:10 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:13.922 23:08:10 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:13.922 23:08:10 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:13.922 23:08:10 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:13.922 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:08:13.922 23:08:10 -- common/autotest_common.sh@390 -- # return 0 00:08:13.922 23:08:10 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:08:13.922 23:08:10 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:08:13.922 23:08:10 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:13.922 23:08:10 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:13.922 23:08:10 -- common/autotest_common.sh@1682 -- # true 00:08:13.922 23:08:10 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:08:13.922 23:08:10 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:13.922 23:08:10 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:13.922 23:08:10 -- common/autotest_common.sh@27 -- # exec 00:08:13.922 23:08:10 -- common/autotest_common.sh@29 -- # exec 00:08:13.922 23:08:10 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:13.922 23:08:10 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:13.922 23:08:10 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:13.922 23:08:10 -- common/autotest_common.sh@18 -- # set -x 00:08:13.922 23:08:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:13.922 23:08:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:13.922 23:08:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:13.922 23:08:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:13.922 23:08:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:13.922 23:08:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:13.922 23:08:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:13.922 23:08:10 -- scripts/common.sh@335 -- # IFS=.-: 00:08:13.922 23:08:10 -- scripts/common.sh@335 -- # read -ra ver1 00:08:13.922 23:08:10 -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.922 23:08:10 -- scripts/common.sh@336 -- # read -ra ver2 00:08:13.922 23:08:10 -- scripts/common.sh@337 -- # local 'op=<' 00:08:13.922 23:08:10 -- scripts/common.sh@339 -- # ver1_l=2 00:08:13.922 23:08:10 -- scripts/common.sh@340 -- # ver2_l=1 00:08:13.922 23:08:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:13.922 23:08:10 -- scripts/common.sh@343 -- # case "$op" in 00:08:13.922 23:08:10 -- scripts/common.sh@344 -- # : 1 00:08:13.922 23:08:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:13.922 23:08:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:13.922 23:08:10 -- scripts/common.sh@364 -- # decimal 1 00:08:13.922 23:08:10 -- scripts/common.sh@352 -- # local d=1 00:08:13.922 23:08:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.922 23:08:10 -- scripts/common.sh@354 -- # echo 1 00:08:13.922 23:08:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:13.922 23:08:10 -- scripts/common.sh@365 -- # decimal 2 00:08:13.922 23:08:10 -- scripts/common.sh@352 -- # local d=2 00:08:13.922 23:08:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.922 23:08:10 -- scripts/common.sh@354 -- # echo 2 00:08:13.922 23:08:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:13.922 23:08:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:13.922 23:08:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:13.922 23:08:10 -- scripts/common.sh@367 -- # return 0 00:08:13.922 23:08:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.922 23:08:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:13.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.922 --rc genhtml_branch_coverage=1 00:08:13.922 --rc genhtml_function_coverage=1 00:08:13.922 --rc genhtml_legend=1 00:08:13.922 --rc geninfo_all_blocks=1 00:08:13.922 --rc geninfo_unexecuted_blocks=1 00:08:13.922 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:13.922 ' 00:08:13.922 23:08:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:13.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.922 --rc genhtml_branch_coverage=1 00:08:13.922 --rc genhtml_function_coverage=1 00:08:13.922 --rc genhtml_legend=1 00:08:13.922 --rc geninfo_all_blocks=1 00:08:13.922 --rc geninfo_unexecuted_blocks=1 00:08:13.922 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:13.922 ' 00:08:13.922 23:08:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:13.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:13.922 --rc genhtml_branch_coverage=1 00:08:13.922 --rc genhtml_function_coverage=1 00:08:13.922 --rc genhtml_legend=1 00:08:13.923 --rc geninfo_all_blocks=1 00:08:13.923 --rc geninfo_unexecuted_blocks=1 00:08:13.923 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:13.923 ' 00:08:13.923 23:08:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:13.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.923 --rc genhtml_branch_coverage=1 00:08:13.923 --rc genhtml_function_coverage=1 00:08:13.923 --rc genhtml_legend=1 00:08:13.923 --rc geninfo_all_blocks=1 00:08:13.923 --rc geninfo_unexecuted_blocks=1 00:08:13.923 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:13.923 ' 00:08:13.923 23:08:10 -- vfio/run.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:08:13.923 23:08:10 -- ../common.sh@8 -- # pids=() 00:08:13.923 23:08:10 -- vfio/run.sh@58 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:13.923 23:08:10 -- vfio/run.sh@59 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:08:13.923 23:08:10 -- vfio/run.sh@59 -- # fuzz_num=7 00:08:13.923 23:08:10 -- vfio/run.sh@60 -- # (( fuzz_num != 0 )) 00:08:13.923 23:08:10 -- vfio/run.sh@62 -- # trap 'cleanup /tmp/vfio-user-*; exit 1' SIGINT SIGTERM EXIT 00:08:13.923 23:08:10 -- vfio/run.sh@65 -- # mem_size=0 00:08:13.923 23:08:10 -- vfio/run.sh@66 -- # [[ 1 -eq 1 ]] 00:08:13.923 23:08:10 -- vfio/run.sh@67 -- # start_llvm_fuzz_short 7 1 00:08:13.923 23:08:10 -- ../common.sh@69 -- # local fuzz_num=7 00:08:13.923 23:08:10 -- ../common.sh@70 -- # local time=1 00:08:13.923 23:08:10 -- ../common.sh@72 -- # (( i = 0 )) 00:08:13.923 23:08:10 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:13.923 23:08:10 -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:08:13.923 23:08:10 -- vfio/run.sh@22 -- # local fuzzer_type=0 00:08:13.923 23:08:10 -- vfio/run.sh@23 -- # local timen=1 00:08:13.923 23:08:10 -- vfio/run.sh@24 -- # local core=0x1 00:08:13.923 23:08:10 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:13.923 23:08:10 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:08:13.923 23:08:10 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:08:13.923 23:08:10 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:08:13.923 23:08:10 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:08:13.923 23:08:10 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:13.923 23:08:10 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:08:13.923 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:13.923 23:08:10 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 
-Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:08:13.923 [2024-11-17 23:08:10.528970] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:13.923 [2024-11-17 23:08:10.529042] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308162 ] 00:08:14.183 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.183 [2024-11-17 23:08:10.600822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.183 [2024-11-17 23:08:10.670775] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:14.183 [2024-11-17 23:08:10.670912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.442 INFO: Running with entropic power schedule (0xFF, 100). 00:08:14.442 INFO: Seed: 2931662440 00:08:14.442 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:14.442 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:14.442 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:08:14.442 INFO: A corpus is not provided, starting from an empty corpus 00:08:14.442 #2 INITED exec/s: 0 rss: 62Mb 00:08:14.442 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:14.442 This may also happen if the target rejected all inputs we tried so far 00:08:14.960 NEW_FUNC[1/631]: 0x43a218 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:85 00:08:14.960 NEW_FUNC[2/631]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:14.960 #3 NEW cov: 10761 ft: 10733 corp: 2/21b lim: 60 exec/s: 0 rss: 68Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:08:14.960 #4 NEW cov: 10785 ft: 13494 corp: 3/48b lim: 60 exec/s: 0 rss: 69Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:08:15.219 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:15.219 #5 NEW cov: 10802 ft: 14868 corp: 4/71b lim: 60 exec/s: 0 rss: 70Mb L: 23/27 MS: 1 EraseBytes- 00:08:15.478 #6 NEW cov: 10802 ft: 15800 corp: 5/94b lim: 60 exec/s: 6 rss: 70Mb L: 23/27 MS: 1 ChangeBinInt- 00:08:15.737 #7 NEW cov: 10802 ft: 15931 corp: 6/114b lim: 60 exec/s: 7 rss: 70Mb L: 20/27 MS: 1 ShuffleBytes- 00:08:15.737 #8 NEW cov: 10802 ft: 16168 corp: 7/128b lim: 60 exec/s: 8 rss: 70Mb L: 14/27 MS: 1 CrossOver- 00:08:15.996 #14 NEW cov: 10802 ft: 16403 corp: 8/148b lim: 60 exec/s: 14 rss: 70Mb L: 20/27 MS: 1 ChangeBit- 00:08:16.255 #15 NEW cov: 10802 ft: 16428 corp: 9/184b lim: 60 exec/s: 15 rss: 70Mb L: 36/36 MS: 1 InsertRepeatedBytes- 00:08:16.255 #16 NEW cov: 10809 ft: 16653 corp: 10/204b lim: 60 exec/s: 16 rss: 70Mb L: 20/36 MS: 1 ChangeByte- 00:08:16.514 #17 NEW cov: 10809 ft: 16697 corp: 11/232b lim: 60 exec/s: 8 rss: 70Mb L: 28/36 MS: 1 CMP- DE: "l\000\000\000\000\000\000\000"- 00:08:16.514 #17 DONE cov: 10809 ft: 16697 corp: 11/232b lim: 60 exec/s: 8 rss: 70Mb 00:08:16.514 ###### Recommended dictionary. ###### 00:08:16.514 "l\000\000\000\000\000\000\000" # Uses: 0 00:08:16.514 ###### End of recommended dictionary. 
###### 00:08:16.514 Done 17 runs in 2 second(s) 00:08:16.774 23:08:13 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-0 00:08:16.774 23:08:13 -- ../common.sh@72 -- # (( i++ )) 00:08:16.774 23:08:13 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:16.774 23:08:13 -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:08:16.774 23:08:13 -- vfio/run.sh@22 -- # local fuzzer_type=1 00:08:16.774 23:08:13 -- vfio/run.sh@23 -- # local timen=1 00:08:16.774 23:08:13 -- vfio/run.sh@24 -- # local core=0x1 00:08:16.774 23:08:13 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:16.774 23:08:13 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:08:16.774 23:08:13 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:08:16.774 23:08:13 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:08:16.774 23:08:13 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:08:16.774 23:08:13 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:16.774 23:08:13 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:08:16.774 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:16.774 23:08:13 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:08:16.774 [2024-11-17 23:08:13.322403] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:16.774 [2024-11-17 23:08:13.322476] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308706 ] 00:08:16.774 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.034 [2024-11-17 23:08:13.395631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.034 [2024-11-17 23:08:13.461618] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:17.034 [2024-11-17 23:08:13.461761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.034 INFO: Running with entropic power schedule (0xFF, 100). 00:08:17.034 INFO: Seed: 1422703419 00:08:17.293 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:17.293 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:17.293 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:08:17.293 INFO: A corpus is not provided, starting from an empty corpus 00:08:17.293 #2 INITED exec/s: 0 rss: 62Mb 00:08:17.293 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:17.293 This may also happen if the target rejected all inputs we tried so far 00:08:17.293 [2024-11-17 23:08:13.745595] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:17.293 [2024-11-17 23:08:13.745633] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:17.293 [2024-11-17 23:08:13.745667] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:17.553 NEW_FUNC[1/634]: 0x43a7b8 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:72 00:08:17.553 NEW_FUNC[2/634]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:17.553 #7 NEW cov: 10649 ft: 10721 corp: 2/6b lim: 40 exec/s: 0 rss: 67Mb L: 5/5 MS: 5 CopyPart-CopyPart-CopyPart-ChangeBinInt-CopyPart- 00:08:17.813 [2024-11-17 23:08:14.216112] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:17.813 [2024-11-17 23:08:14.216148] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:17.813 [2024-11-17 23:08:14.216166] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:17.813 NEW_FUNC[1/4]: 0x161e578 in nvme_pcie_qpair_submit_tracker /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_pcie_common.c:622 00:08:17.813 NEW_FUNC[2/4]: 0x1621628 in nvme_pcie_copy_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_pcie_common.c:605 00:08:17.813 #8 NEW cov: 10792 ft: 14086 corp: 3/16b lim: 40 exec/s: 0 rss: 68Mb L: 10/10 MS: 1 CrossOver- 00:08:17.813 [2024-11-17 23:08:14.417243] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:17.813 [2024-11-17 23:08:14.417267] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:17.813 [2024-11-17 23:08:14.417284] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:18.073 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:18.073 #9 NEW cov: 10809 ft: 14679 corp: 4/26b lim: 40 exec/s: 0 rss: 69Mb L: 10/10 MS: 1 CopyPart- 00:08:18.073 [2024-11-17 23:08:14.605636] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:18.073 [2024-11-17 23:08:14.605659] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:18.073 [2024-11-17 23:08:14.605677] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:18.332 #15 NEW cov: 10809 ft: 16523 corp: 5/38b lim: 40 exec/s: 15 rss: 69Mb L: 12/12 MS: 1 CopyPart- 00:08:18.332 [2024-11-17 23:08:14.800870] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:18.332 [2024-11-17 23:08:14.800892] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:18.332 [2024-11-17 23:08:14.800910] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:18.332 #16 NEW cov: 10809 ft: 16792 corp: 6/65b lim: 40 exec/s: 16 rss: 69Mb L: 27/27 MS: 1 InsertRepeatedBytes- 00:08:18.592 [2024-11-17 23:08:14.987121] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:18.592 [2024-11-17 23:08:14.987144] vfio_user.c:3096:vfio_user_log: *ERROR*: 
/tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:18.592 [2024-11-17 23:08:14.987162] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:18.592 #17 NEW cov: 10809 ft: 16934 corp: 7/87b lim: 40 exec/s: 17 rss: 69Mb L: 22/27 MS: 1 InsertRepeatedBytes- 00:08:18.592 [2024-11-17 23:08:15.172193] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:18.592 [2024-11-17 23:08:15.172215] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:18.592 [2024-11-17 23:08:15.172232] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:18.851 #18 NEW cov: 10809 ft: 17255 corp: 8/92b lim: 40 exec/s: 18 rss: 69Mb L: 5/27 MS: 1 CopyPart- 00:08:18.851 [2024-11-17 23:08:15.357265] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:18.851 [2024-11-17 23:08:15.357287] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:18.851 [2024-11-17 23:08:15.357305] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:19.110 #19 NEW cov: 10816 ft: 17664 corp: 9/97b lim: 40 exec/s: 19 rss: 69Mb L: 5/27 MS: 1 ShuffleBytes- 00:08:19.110 [2024-11-17 23:08:15.538831] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:19.110 [2024-11-17 23:08:15.538852] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:19.110 [2024-11-17 23:08:15.538870] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:19.110 #20 NEW cov: 10816 ft: 17796 corp: 10/119b lim: 40 exec/s: 20 rss: 69Mb L: 22/27 MS: 1 ChangeByte- 00:08:19.368 [2024-11-17 23:08:15.724959] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:08:19.368 [2024-11-17 23:08:15.724981] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:08:19.368 [2024-11-17 23:08:15.724999] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:08:19.368 #24 NEW cov: 10816 ft: 18063 corp: 11/140b lim: 40 exec/s: 12 rss: 69Mb L: 21/27 MS: 4 EraseBytes-EraseBytes-ChangeByte-CrossOver- 00:08:19.368 #24 DONE cov: 10816 ft: 18063 corp: 11/140b lim: 40 exec/s: 12 rss: 69Mb 00:08:19.368 Done 24 runs in 2 second(s) 00:08:19.627 23:08:16 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-1 00:08:19.627 23:08:16 -- ../common.sh@72 -- # (( i++ )) 00:08:19.627 23:08:16 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:19.627 23:08:16 -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:08:19.627 23:08:16 -- vfio/run.sh@22 -- # local fuzzer_type=2 00:08:19.627 23:08:16 -- vfio/run.sh@23 -- # local timen=1 00:08:19.627 23:08:16 -- vfio/run.sh@24 -- # local core=0x1 00:08:19.627 23:08:16 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:19.627 23:08:16 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:08:19.627 23:08:16 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:08:19.627 23:08:16 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:08:19.627 23:08:16 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:08:19.627 23:08:16 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 
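For reference, the per-fuzzer setup traced above (and repeated below for each fuzzer type) reduces to the following pattern. This is a minimal sketch reconstructed from the traced mkdir/sed/llvm_vfio_fuzz commands, not the verbatim vfio/run.sh source: the SPDK_DIR and CORPUS_ROOT variables, the exact function body, and the redirection target of the sed call are assumptions, and the -P output path passed in the real invocation is omitted for brevity.

start_llvm_fuzz() {
    # One sandbox per fuzzer type: private /tmp/vfio-user-N directories and
    # a dedicated corpus, so runs cannot collide and cleanup is one rm -rf.
    local fuzzer_type=$1 timen=$2 core=$3
    local dir=/tmp/vfio-user-$fuzzer_type
    mkdir -p "$dir" "$dir/domain/1" "$dir/domain/2" \
        "$CORPUS_ROOT/llvm_vfio_$fuzzer_type"
    # Rewrite the shared template config so this run talks to its own
    # vfio-user domain sockets (output redirection assumed).
    sed -e "s%/tmp/vfio-user/domain/1%$dir/domain/1%; s%/tmp/vfio-user/domain/2%$dir/domain/2%" \
        "$SPDK_DIR/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" \
        > "$dir/fuzz_vfio_json.conf"
    # Flags mirror the traced invocation: -Z selects the fuzzer type,
    # -t the time argument, -m the core mask, -D the corpus directory,
    # -r a per-run socket path.
    "$SPDK_DIR/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" -m "$core" -s 0 \
        -F "$dir/domain/1" -c "$dir/fuzz_vfio_json.conf" -t "$timen" \
        -D "$CORPUS_ROOT/llvm_vfio_$fuzzer_type" -Y "$dir/domain/2" \
        -r "$dir/spdk$fuzzer_type.sock" -Z "$fuzzer_type"
    rm -rf "$dir"
}
# As traced in this run: seven fuzzer types, time argument 1, core mask 0x1.
for ((i = 0; i < 7; i++)); do start_llvm_fuzz "$i" 1 0x1; done

Keeping each fuzzer's vfio-user sockets and config under its own numbered directory is what lets the harness tear a run down with a single rm -rf (as in the trace above) and start the next fuzzer from a clean state.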
00:08:19.627 23:08:16 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:08:19.627 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:19.627 23:08:16 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:08:19.627 [2024-11-17 23:08:16.146614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:19.627 [2024-11-17 23:08:16.146689] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309255 ] 00:08:19.627 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.627 [2024-11-17 23:08:16.219091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.886 [2024-11-17 23:08:16.285088] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:19.886 [2024-11-17 23:08:16.285230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.886 INFO: Running with entropic power schedule (0xFF, 100). 00:08:19.886 INFO: Seed: 4251690421 00:08:19.886 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:19.886 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:19.886 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:08:19.886 INFO: A corpus is not provided, starting from an empty corpus 00:08:19.886 #2 INITED exec/s: 0 rss: 62Mb 00:08:19.886 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:19.886 This may also happen if the target rejected all inputs we tried so far 00:08:20.145 [2024-11-17 23:08:16.588079] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:20.404 NEW_FUNC[1/636]: 0x43b1a8 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:104 00:08:20.404 NEW_FUNC[2/636]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:20.404 #7 NEW cov: 10755 ft: 10724 corp: 2/70b lim: 80 exec/s: 0 rss: 68Mb L: 69/69 MS: 5 ShuffleBytes-ShuffleBytes-CrossOver-ChangeByte-InsertRepeatedBytes- 00:08:20.662 [2024-11-17 23:08:17.045683] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:20.662 #8 NEW cov: 10772 ft: 13884 corp: 3/139b lim: 80 exec/s: 0 rss: 69Mb L: 69/69 MS: 1 CopyPart- 00:08:20.662 [2024-11-17 23:08:17.233651] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:20.921 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:20.921 #11 NEW cov: 10789 ft: 14750 corp: 4/202b lim: 80 exec/s: 0 rss: 70Mb L: 63/69 MS: 3 ShuffleBytes-ShuffleBytes-CrossOver- 00:08:20.921 [2024-11-17 23:08:17.429580] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:21.179 #12 NEW cov: 10789 ft: 15278 corp: 5/271b lim: 80 exec/s: 12 rss: 70Mb L: 69/69 MS: 1 CopyPart- 00:08:21.179 [2024-11-17 23:08:17.614712] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:21.179 #13 NEW cov: 10789 ft: 15547 corp: 6/346b lim: 80 exec/s: 13 rss: 70Mb L: 75/75 MS: 1 InsertRepeatedBytes- 00:08:21.438 [2024-11-17 23:08:17.800474] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:21.438 #14 NEW cov: 10789 ft: 15680 corp: 7/424b lim: 80 exec/s: 14 rss: 70Mb L: 78/78 MS: 1 InsertRepeatedBytes- 00:08:21.438 [2024-11-17 23:08:17.982798] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:21.698 #15 NEW cov: 10789 ft: 15771 corp: 8/493b lim: 80 exec/s: 15 rss: 70Mb L: 69/78 MS: 1 ShuffleBytes- 00:08:21.698 [2024-11-17 23:08:18.166000] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:21.698 #16 NEW cov: 10789 ft: 15897 corp: 9/563b lim: 80 exec/s: 16 rss: 70Mb L: 70/78 MS: 1 CrossOver- 00:08:21.957 [2024-11-17 23:08:18.346768] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:21.957 #22 NEW cov: 10796 ft: 16007 corp: 10/642b lim: 80 exec/s: 22 rss: 70Mb L: 79/79 MS: 1 InsertByte- 00:08:21.957 [2024-11-17 23:08:18.530855] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:08:22.215 #23 NEW cov: 10796 ft: 16125 corp: 11/718b lim: 80 exec/s: 11 rss: 70Mb L: 76/79 MS: 1 CopyPart- 00:08:22.215 #23 DONE cov: 10796 ft: 16125 corp: 11/718b lim: 80 exec/s: 11 rss: 70Mb 00:08:22.215 Done 23 runs in 2 second(s) 00:08:22.474 23:08:18 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-2 00:08:22.475 23:08:18 -- ../common.sh@72 -- # (( i++ )) 00:08:22.475 23:08:18 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:22.475 23:08:18 -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:08:22.475 23:08:18 -- vfio/run.sh@22 -- # local fuzzer_type=3 00:08:22.475 23:08:18 -- 
vfio/run.sh@23 -- # local timen=1 00:08:22.475 23:08:18 -- vfio/run.sh@24 -- # local core=0x1 00:08:22.475 23:08:18 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:22.475 23:08:18 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:08:22.475 23:08:18 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:08:22.475 23:08:18 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:08:22.475 23:08:18 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:08:22.475 23:08:18 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:22.475 23:08:18 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:08:22.475 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:22.475 23:08:18 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:08:22.475 [2024-11-17 23:08:18.933361] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:22.475 [2024-11-17 23:08:18.933416] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309665 ] 00:08:22.475 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.475 [2024-11-17 23:08:19.002300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.475 [2024-11-17 23:08:19.070033] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:22.475 [2024-11-17 23:08:19.070175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.735 INFO: Running with entropic power schedule (0xFF, 100). 00:08:22.735 INFO: Seed: 2744734403 00:08:22.735 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:22.735 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:22.735 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:08:22.735 INFO: A corpus is not provided, starting from an empty corpus 00:08:22.735 #2 INITED exec/s: 0 rss: 62Mb 00:08:22.735 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:22.735 This may also happen if the target rejected all inputs we tried so far 00:08:23.252 NEW_FUNC[1/632]: 0x43b898 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:125 00:08:23.252 NEW_FUNC[2/632]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:23.252 #6 NEW cov: 10748 ft: 10580 corp: 2/44b lim: 320 exec/s: 0 rss: 68Mb L: 43/43 MS: 4 ChangeByte-ChangeBit-InsertByte-InsertRepeatedBytes- 00:08:23.511 #7 NEW cov: 10762 ft: 13729 corp: 3/87b lim: 320 exec/s: 0 rss: 69Mb L: 43/43 MS: 1 ChangeByte- 00:08:23.770 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:23.770 #8 NEW cov: 10779 ft: 15579 corp: 4/138b lim: 320 exec/s: 0 rss: 70Mb L: 51/51 MS: 1 CMP- DE: "q\011T\004\000\000\000\000"- 00:08:23.770 #9 NEW cov: 10779 ft: 15963 corp: 5/189b lim: 320 exec/s: 9 rss: 70Mb L: 51/51 MS: 1 PersAutoDict- DE: "q\011T\004\000\000\000\000"- 00:08:24.029 #10 NEW cov: 10779 ft: 16486 corp: 6/240b lim: 320 exec/s: 10 rss: 70Mb L: 51/51 MS: 1 CrossOver- 00:08:24.289 #12 NEW cov: 10779 ft: 16758 corp: 7/295b lim: 320 exec/s: 12 rss: 70Mb L: 55/55 MS: 2 ChangeBinInt-InsertRepeatedBytes- 00:08:24.548 #13 NEW cov: 10779 ft: 16895 corp: 8/350b lim: 320 exec/s: 13 rss: 70Mb L: 55/55 MS: 1 ChangeBinInt- 00:08:24.548 #14 NEW cov: 10786 ft: 17055 corp: 9/393b lim: 320 exec/s: 14 rss: 70Mb L: 43/55 MS: 1 ChangeByte- 00:08:24.808 #15 NEW cov: 10786 ft: 17124 corp: 10/437b lim: 320 exec/s: 7 rss: 70Mb L: 44/55 MS: 1 InsertByte- 00:08:24.808 #15 DONE cov: 10786 ft: 17124 corp: 10/437b lim: 320 exec/s: 7 rss: 70Mb 00:08:24.808 ###### Recommended dictionary. ###### 00:08:24.808 "q\011T\004\000\000\000\000" # Uses: 1 00:08:24.808 ###### End of recommended dictionary. 
###### 00:08:24.808 Done 15 runs in 2 second(s) 00:08:25.067 23:08:21 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-3 00:08:25.067 23:08:21 -- ../common.sh@72 -- # (( i++ )) 00:08:25.067 23:08:21 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:25.067 23:08:21 -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:08:25.067 23:08:21 -- vfio/run.sh@22 -- # local fuzzer_type=4 00:08:25.067 23:08:21 -- vfio/run.sh@23 -- # local timen=1 00:08:25.067 23:08:21 -- vfio/run.sh@24 -- # local core=0x1 00:08:25.067 23:08:21 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:25.067 23:08:21 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:08:25.067 23:08:21 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:08:25.067 23:08:21 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:08:25.067 23:08:21 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:08:25.067 23:08:21 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:25.067 23:08:21 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:08:25.067 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:25.067 23:08:21 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:08:25.067 [2024-11-17 23:08:21.610364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:25.068 [2024-11-17 23:08:21.610458] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1310094 ] 00:08:25.068 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.327 [2024-11-17 23:08:21.683273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.327 [2024-11-17 23:08:21.748844] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:25.327 [2024-11-17 23:08:21.749004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.327 INFO: Running with entropic power schedule (0xFF, 100). 00:08:25.327 INFO: Seed: 1125777003 00:08:25.586 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:25.586 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:25.586 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:08:25.586 INFO: A corpus is not provided, starting from an empty corpus 00:08:25.586 #2 INITED exec/s: 0 rss: 62Mb 00:08:25.586 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:25.586 This may also happen if the target rejected all inputs we tried so far 00:08:25.846 NEW_FUNC[1/632]: 0x43c118 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:145 00:08:25.846 NEW_FUNC[2/632]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:25.846 #34 NEW cov: 10749 ft: 10705 corp: 2/105b lim: 320 exec/s: 0 rss: 68Mb L: 104/104 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:26.105 #35 NEW cov: 10768 ft: 13427 corp: 3/210b lim: 320 exec/s: 0 rss: 69Mb L: 105/105 MS: 1 InsertByte- 00:08:26.364 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:26.364 #41 NEW cov: 10785 ft: 14370 corp: 4/315b lim: 320 exec/s: 0 rss: 70Mb L: 105/105 MS: 1 ShuffleBytes- 00:08:26.623 #42 NEW cov: 10785 ft: 14485 corp: 5/420b lim: 320 exec/s: 42 rss: 70Mb L: 105/105 MS: 1 ChangeByte- 00:08:26.623 #43 NEW cov: 10785 ft: 14726 corp: 6/525b lim: 320 exec/s: 43 rss: 70Mb L: 105/105 MS: 1 ChangeBinInt- 00:08:26.883 #44 NEW cov: 10785 ft: 14822 corp: 7/630b lim: 320 exec/s: 44 rss: 70Mb L: 105/105 MS: 1 InsertByte- 00:08:27.142 #45 NEW cov: 10785 ft: 14979 corp: 8/735b lim: 320 exec/s: 45 rss: 70Mb L: 105/105 MS: 1 CrossOver- 00:08:27.401 #47 NEW cov: 10792 ft: 15615 corp: 9/840b lim: 320 exec/s: 47 rss: 70Mb L: 105/105 MS: 2 InsertByte-InsertRepeatedBytes- 00:08:27.660 #48 NEW cov: 10792 ft: 15629 corp: 10/945b lim: 320 exec/s: 24 rss: 70Mb L: 105/105 MS: 1 ShuffleBytes- 00:08:27.660 #48 DONE cov: 10792 ft: 15629 corp: 10/945b lim: 320 exec/s: 24 rss: 70Mb 00:08:27.660 Done 48 runs in 2 second(s) 00:08:27.920 23:08:24 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-4 00:08:27.920 23:08:24 -- ../common.sh@72 -- # (( i++ )) 00:08:27.920 23:08:24 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:27.920 23:08:24 -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:08:27.920 23:08:24 -- vfio/run.sh@22 -- # local fuzzer_type=5 00:08:27.920 23:08:24 -- vfio/run.sh@23 -- # local timen=1 00:08:27.920 23:08:24 -- vfio/run.sh@24 -- # local core=0x1 00:08:27.920 23:08:24 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:27.920 23:08:24 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:08:27.920 23:08:24 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:08:27.920 23:08:24 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:08:27.920 23:08:24 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:08:27.920 23:08:24 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:27.920 23:08:24 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:08:27.920 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:27.920 23:08:24 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r 
/tmp/vfio-user-5/spdk5.sock -Z 5 00:08:27.920 [2024-11-17 23:08:24.336384] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:27.920 [2024-11-17 23:08:24.336457] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1310639 ] 00:08:27.920 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.920 [2024-11-17 23:08:24.409772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.920 [2024-11-17 23:08:24.475515] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:27.920 [2024-11-17 23:08:24.475660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.179 INFO: Running with entropic power schedule (0xFF, 100). 00:08:28.179 INFO: Seed: 3847758164 00:08:28.179 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:28.179 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:28.179 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:08:28.179 INFO: A corpus is not provided, starting from an empty corpus 00:08:28.179 #2 INITED exec/s: 0 rss: 62Mb 00:08:28.179 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:08:28.179 This may also happen if the target rejected all inputs we tried so far 00:08:28.179 [2024-11-17 23:08:24.759630] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:28.179 [2024-11-17 23:08:24.759674] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:28.697 NEW_FUNC[1/638]: 0x43cb18 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:172 00:08:28.697 NEW_FUNC[2/638]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:28.697 #10 NEW cov: 10784 ft: 10716 corp: 2/54b lim: 120 exec/s: 0 rss: 67Mb L: 53/53 MS: 3 InsertByte-EraseBytes-InsertRepeatedBytes- 00:08:28.697 [2024-11-17 23:08:25.235728] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:28.697 [2024-11-17 23:08:25.235771] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:28.956 #11 NEW cov: 10798 ft: 14290 corp: 3/107b lim: 120 exec/s: 0 rss: 69Mb L: 53/53 MS: 1 ChangeBit- 00:08:28.956 [2024-11-17 23:08:25.426919] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:28.957 [2024-11-17 23:08:25.426953] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:28.957 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:28.957 #12 NEW cov: 10815 ft: 14750 corp: 4/160b lim: 120 exec/s: 0 rss: 70Mb L: 53/53 MS: 1 ChangeByte- 00:08:29.215 [2024-11-17 23:08:25.614581] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:29.215 [2024-11-17 23:08:25.614611] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:29.215 #13 NEW cov: 10815 ft: 14952 corp: 5/213b lim: 120 exec/s: 13 rss: 70Mb L: 53/53 MS: 1 ShuffleBytes- 00:08:29.215 [2024-11-17 23:08:25.801458] 
vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:29.215 [2024-11-17 23:08:25.801487] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:29.478 #14 NEW cov: 10815 ft: 15991 corp: 6/272b lim: 120 exec/s: 14 rss: 70Mb L: 59/59 MS: 1 CopyPart- 00:08:29.478 [2024-11-17 23:08:25.988854] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:29.478 [2024-11-17 23:08:25.988883] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:29.779 #15 NEW cov: 10815 ft: 16163 corp: 7/325b lim: 120 exec/s: 15 rss: 70Mb L: 53/59 MS: 1 ChangeBit- 00:08:29.779 [2024-11-17 23:08:26.177609] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:29.779 [2024-11-17 23:08:26.177640] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:29.779 #16 NEW cov: 10815 ft: 16179 corp: 8/378b lim: 120 exec/s: 16 rss: 70Mb L: 53/59 MS: 1 ChangeBit- 00:08:29.779 [2024-11-17 23:08:26.364195] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:29.779 [2024-11-17 23:08:26.364228] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:30.055 #17 NEW cov: 10815 ft: 16426 corp: 9/431b lim: 120 exec/s: 17 rss: 70Mb L: 53/59 MS: 1 CopyPart- 00:08:30.055 [2024-11-17 23:08:26.558151] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:30.055 [2024-11-17 23:08:26.558186] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:30.315 #18 NEW cov: 10822 ft: 16595 corp: 10/484b lim: 120 exec/s: 18 rss: 70Mb L: 53/59 MS: 1 ChangeBit- 00:08:30.315 [2024-11-17 23:08:26.748589] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:30.315 [2024-11-17 23:08:26.748621] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:30.315 #19 NEW cov: 10822 ft: 16644 corp: 11/543b lim: 120 exec/s: 9 rss: 70Mb L: 59/59 MS: 1 CopyPart- 00:08:30.315 #19 DONE cov: 10822 ft: 16644 corp: 11/543b lim: 120 exec/s: 9 rss: 70Mb 00:08:30.315 Done 19 runs in 2 second(s) 00:08:30.574 23:08:27 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-5 00:08:30.574 23:08:27 -- ../common.sh@72 -- # (( i++ )) 00:08:30.574 23:08:27 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:30.574 23:08:27 -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:08:30.574 23:08:27 -- vfio/run.sh@22 -- # local fuzzer_type=6 00:08:30.574 23:08:27 -- vfio/run.sh@23 -- # local timen=1 00:08:30.574 23:08:27 -- vfio/run.sh@24 -- # local core=0x1 00:08:30.574 23:08:27 -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:30.574 23:08:27 -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:08:30.574 23:08:27 -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:08:30.574 23:08:27 -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:08:30.574 23:08:27 -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:08:30.574 23:08:27 -- vfio/run.sh@31 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:30.574 23:08:27 -- vfio/run.sh@34 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:08:30.574 
s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:08:30.574 23:08:27 -- vfio/run.sh@38 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:08:30.574 [2024-11-17 23:08:27.166783] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:30.574 [2024-11-17 23:08:27.166853] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1311181 ] 00:08:30.833 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.833 [2024-11-17 23:08:27.240446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.833 [2024-11-17 23:08:27.312114] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:30.833 [2024-11-17 23:08:27.312276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.092 INFO: Running with entropic power schedule (0xFF, 100). 00:08:31.092 INFO: Seed: 2386788426 00:08:31.092 INFO: Loaded 1 modules (341891 inline 8-bit counters): 341891 [0x27db80c, 0x282ef8f), 00:08:31.092 INFO: Loaded 1 PC tables (341891 PCs): 341891 [0x282ef90,0x2d667c0), 00:08:31.092 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:08:31.092 INFO: A corpus is not provided, starting from an empty corpus 00:08:31.092 #2 INITED exec/s: 0 rss: 61Mb 00:08:31.092 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:08:31.092 This may also happen if the target rejected all inputs we tried so far 00:08:31.092 [2024-11-17 23:08:27.598567] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:31.092 [2024-11-17 23:08:27.598607] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:31.610 NEW_FUNC[1/632]: 0x43d808 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:08:31.610 NEW_FUNC[2/632]: 0x43fdb8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:08:31.610 #15 NEW cov: 10628 ft: 10734 corp: 2/36b lim: 90 exec/s: 0 rss: 67Mb L: 35/35 MS: 3 ChangeBit-InsertByte-InsertRepeatedBytes- 00:08:31.610 [2024-11-17 23:08:28.064331] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:31.610 [2024-11-17 23:08:28.064375] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:31.610 NEW_FUNC[1/6]: 0x10b7bb8 in nvmf_qpair_request_cleanup /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:4329 00:08:31.610 NEW_FUNC[2/6]: 0x135a8e8 in cq_free_slots /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/vfio_user.c:1693 00:08:31.610 #16 NEW cov: 10786 ft: 14014 corp: 3/83b lim: 90 exec/s: 0 rss: 69Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:08:31.870 [2024-11-17 23:08:28.272464] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:31.870 [2024-11-17 23:08:28.272496] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:31.870 NEW_FUNC[1/1]: 0x191add8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:609 00:08:31.870 #17 NEW cov: 10803 ft: 14814 corp: 4/115b lim: 90 exec/s: 0 rss: 70Mb L: 32/47 MS: 1 CrossOver- 00:08:31.870 [2024-11-17 23:08:28.457257] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:31.870 [2024-11-17 23:08:28.457288] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:32.130 #21 NEW cov: 10803 ft: 15164 corp: 5/157b lim: 90 exec/s: 21 rss: 70Mb L: 42/47 MS: 4 ChangeByte-ChangeByte-CrossOver-InsertRepeatedBytes- 00:08:32.130 [2024-11-17 23:08:28.647120] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:32.130 [2024-11-17 23:08:28.647149] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:32.389 #22 NEW cov: 10803 ft: 15915 corp: 6/189b lim: 90 exec/s: 22 rss: 71Mb L: 32/47 MS: 1 ChangeByte- 00:08:32.390 [2024-11-17 23:08:28.831776] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:32.390 [2024-11-17 23:08:28.831805] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:32.390 #23 NEW cov: 10803 ft: 16057 corp: 7/253b lim: 90 exec/s: 23 rss: 71Mb L: 64/64 MS: 1 InsertRepeatedBytes- 00:08:32.649 [2024-11-17 23:08:29.018433] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:32.649 [2024-11-17 23:08:29.018463] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:32.649 #24 NEW cov: 10803 ft: 16414 corp: 8/287b lim: 90 exec/s: 24 rss: 71Mb L: 34/64 MS: 1 EraseBytes- 00:08:32.649 [2024-11-17 23:08:29.203918] vfio_user.c:3096:vfio_user_log: *ERROR*: 
/tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:32.649 [2024-11-17 23:08:29.203947] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:32.908 #25 NEW cov: 10803 ft: 16600 corp: 9/317b lim: 90 exec/s: 25 rss: 71Mb L: 30/64 MS: 1 EraseBytes- 00:08:32.908 [2024-11-17 23:08:29.389570] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:32.908 [2024-11-17 23:08:29.389601] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:32.908 #27 NEW cov: 10810 ft: 16620 corp: 10/353b lim: 90 exec/s: 27 rss: 71Mb L: 36/64 MS: 2 ChangeByte-CrossOver- 00:08:33.167 [2024-11-17 23:08:29.576672] vfio_user.c:3096:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:08:33.167 [2024-11-17 23:08:29.576703] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:08:33.167 #28 NEW cov: 10810 ft: 16803 corp: 11/410b lim: 90 exec/s: 14 rss: 71Mb L: 57/64 MS: 1 InsertRepeatedBytes- 00:08:33.167 #28 DONE cov: 10810 ft: 16803 corp: 11/410b lim: 90 exec/s: 14 rss: 71Mb 00:08:33.167 Done 28 runs in 2 second(s) 00:08:33.427 23:08:29 -- vfio/run.sh@49 -- # rm -rf /tmp/vfio-user-6 00:08:33.427 23:08:29 -- ../common.sh@72 -- # (( i++ )) 00:08:33.427 23:08:29 -- ../common.sh@72 -- # (( i < fuzz_num )) 00:08:33.427 23:08:29 -- vfio/run.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:08:33.427 00:08:33.427 real 0m19.897s 00:08:33.427 user 0m28.146s 00:08:33.427 sys 0m1.811s 00:08:33.427 23:08:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:33.427 23:08:29 -- common/autotest_common.sh@10 -- # set +x 00:08:33.427 ************************************ 00:08:33.427 END TEST vfio_fuzz 00:08:33.427 ************************************ 00:08:33.427 00:08:33.427 real 1m23.947s 00:08:33.427 user 2m8.802s 00:08:33.427 sys 0m8.930s 00:08:33.427 23:08:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:33.427 23:08:29 -- common/autotest_common.sh@10 -- # set +x 00:08:33.427 ************************************ 00:08:33.427 END TEST llvm_fuzz 00:08:33.427 ************************************ 00:08:33.686 23:08:30 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:08:33.686 23:08:30 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:08:33.686 23:08:30 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:08:33.686 23:08:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:33.686 23:08:30 -- common/autotest_common.sh@10 -- # set +x 00:08:33.686 23:08:30 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:08:33.686 23:08:30 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:08:33.686 23:08:30 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:08:33.686 23:08:30 -- common/autotest_common.sh@10 -- # set +x 00:08:40.259 INFO: APP EXITING 00:08:40.259 INFO: killing all VMs 00:08:40.259 INFO: killing vhost app 00:08:40.259 INFO: EXIT DONE 00:08:43.556 Waiting for block devices as requested 00:08:43.556 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:43.557 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:43.557 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:43.557 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:43.557 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:43.557 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:43.557 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:43.557 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:43.557 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:08:43.816 
0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:08:43.816 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:08:43.817 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:08:44.076 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:08:44.076 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:08:44.076 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:08:44.336 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:08:44.336 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:08:48.534 Cleaning 00:08:48.534 Removing: /dev/shm/spdk_tgt_trace.pid1272973 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1270481 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1271764 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1272973 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1273776 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1274101 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1274438 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1274783 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1275134 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1275405 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1275689 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1276010 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1276868 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1280076 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1280378 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1280677 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1280940 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1281517 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1281621 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1282102 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1282370 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1282666 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1282737 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1282984 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1283254 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1283761 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1284043 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1284519 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1284951 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1285210 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1285416 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1285477 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1285743 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1286026 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1286285 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1286488 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1286655 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1286898 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1287165 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1287450 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1287725 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1288006 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1288272 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1288509 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1288676 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1288884 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1289135 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1289423 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1289689 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1289973 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1290252 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1290488 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1290647 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1290858 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1291114 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1291397 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1291674 00:08:48.534 Removing: 
/var/run/dpdk/spdk_pid1291955 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1292223 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1292485 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1292656 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1292856 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1293083 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1293372 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1293641 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1293927 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1294204 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1294494 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1294663 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1294878 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1295071 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1295358 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1295662 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1296035 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1296593 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1297082 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1297617 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1297934 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1298455 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1298990 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1299296 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1299823 00:08:48.534 Removing: /var/run/dpdk/spdk_pid1300359 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1300661 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1301204 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1301723 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1302042 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1302580 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1303060 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1303409 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1303952 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1304424 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1304788 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1305325 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1305722 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1306163 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1306703 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1307073 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1307533 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1308162 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1308706 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1309255 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1309665 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1310094 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1310639 00:08:48.535 Removing: /var/run/dpdk/spdk_pid1311181 00:08:48.535 Clean 00:08:48.535 killing process with pid 1222613 00:08:52.739 killing process with pid 1222610 00:08:52.739 killing process with pid 1222612 00:08:52.739 killing process with pid 1222611 00:08:52.739 23:08:48 -- common/autotest_common.sh@1446 -- # return 0 00:08:52.739 23:08:48 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:08:52.739 23:08:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:52.739 23:08:48 -- common/autotest_common.sh@10 -- # set +x 00:08:52.739 23:08:48 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:08:52.739 23:08:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:52.739 23:08:48 -- common/autotest_common.sh@10 -- # set +x 00:08:52.739 23:08:48 -- spdk/autotest.sh@377 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:08:52.739 23:08:48 -- spdk/autotest.sh@379 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:08:52.739 
00:08:52.739 23:08:48 -- common/autotest_common.sh@1446 -- # return 0
00:08:52.739 23:08:48 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup
00:08:52.739 23:08:48 -- common/autotest_common.sh@728 -- # xtrace_disable
00:08:52.739 23:08:48 -- common/autotest_common.sh@10 -- # set +x
00:08:52.739 23:08:48 -- spdk/autotest.sh@376 -- # timing_exit autotest
00:08:52.739 23:08:48 -- common/autotest_common.sh@728 -- # xtrace_disable
00:08:52.739 23:08:48 -- common/autotest_common.sh@10 -- # set +x
00:08:52.739 23:08:48 -- spdk/autotest.sh@377 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:08:52.739 23:08:48 -- spdk/autotest.sh@379 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]]
00:08:52.739 23:08:48 -- spdk/autotest.sh@379 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log
00:08:52.739 23:08:48 -- spdk/autotest.sh@381 -- # [[ y == y ]]
00:08:52.739 23:08:48 -- spdk/autotest.sh@383 -- # hostname
00:08:52.739 23:08:48 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-20 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info
00:08:52.739 geninfo: WARNING: invalid characters removed from testname!
00:08:53.308 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcda
00:08:53.308 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcda
00:08:53.308 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcda
00:09:03.297 23:08:59 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
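The two lcov steps above are the usual baseline-plus-test coverage flow: -c (capture) walks the build tree's .gcda counters and writes cov_test.info, then -a (add) merges it with the cov_base.info baseline taken before the tests ran, so files the tests never exercised still appear with zero counts. A condensed sketch of the same flow (paths shortened here; the real invocations also carry the --rc and --gcov-tool options shown above):

    # capture the counters produced by the test run
    lcov -q -c --no-external -d ./spdk -t spdk-wfp-20 -o cov_test.info
    # merge with the pre-test baseline so untested files keep zero-count entries
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info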
00:09:11.422 23:09:06 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:15.618 23:09:11 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:19.812 23:09:16 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:25.089 23:09:20 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:29.376 23:09:25 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info
00:09:33.599 23:09:30 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
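Each lcov -r (remove) pass above rewrites cov_total.info in place, dropping records whose source path matches the glob, which strips third-party and uninteresting code (dpdk, /usr headers, example and helper apps) from the report before the intermediates are deleted. The repeated invocations could equally be expressed as a loop; a sketch of that refactoring (hypothetical, not the actual autotest.sh code):

    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
                   '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r cov_total.info "$pattern" -o cov_total.info
    done
    rm -f cov_base.info cov_test.info      # intermediates are no longer needed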
00:09:33.599 23:09:30 -- common/autotest_common.sh@1689 -- $ [[ y == y ]]
00:09:33.599 23:09:30 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}'
00:09:33.599 23:09:30 -- common/autotest_common.sh@1690 -- $ lcov --version
00:09:33.599 23:09:30 -- common/autotest_common.sh@1690 -- $ lt 1.15 2
00:09:33.599 23:09:30 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2
00:09:33.599 23:09:30 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:09:33.599 23:09:30 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:09:33.599 23:09:30 -- scripts/common.sh@335 -- $ IFS=.-:
00:09:33.599 23:09:30 -- scripts/common.sh@335 -- $ read -ra ver1
00:09:33.599 23:09:30 -- scripts/common.sh@336 -- $ IFS=.-:
00:09:33.599 23:09:30 -- scripts/common.sh@336 -- $ read -ra ver2
00:09:33.599 23:09:30 -- scripts/common.sh@337 -- $ local 'op=<'
00:09:33.599 23:09:30 -- scripts/common.sh@339 -- $ ver1_l=2
00:09:33.599 23:09:30 -- scripts/common.sh@340 -- $ ver2_l=1
00:09:33.599 23:09:30 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:09:33.599 23:09:30 -- scripts/common.sh@343 -- $ case "$op" in
00:09:33.599 23:09:30 -- scripts/common.sh@344 -- $ : 1
00:09:33.599 23:09:30 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:09:33.599 23:09:30 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:33.599 23:09:30 -- scripts/common.sh@364 -- $ decimal 1
00:09:33.599 23:09:30 -- scripts/common.sh@352 -- $ local d=1
00:09:33.599 23:09:30 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:09:33.599 23:09:30 -- scripts/common.sh@354 -- $ echo 1
00:09:33.599 23:09:30 -- scripts/common.sh@364 -- $ ver1[v]=1
00:09:33.599 23:09:30 -- scripts/common.sh@365 -- $ decimal 2
00:09:33.599 23:09:30 -- scripts/common.sh@352 -- $ local d=2
00:09:33.599 23:09:30 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:09:33.599 23:09:30 -- scripts/common.sh@354 -- $ echo 2
00:09:33.599 23:09:30 -- scripts/common.sh@365 -- $ ver2[v]=2
00:09:33.599 23:09:30 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:09:33.599 23:09:30 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:09:33.599 23:09:30 -- scripts/common.sh@367 -- $ return 0
00:09:33.599 23:09:30 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:33.599 23:09:30 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS=
00:09:33.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:33.599 --rc genhtml_branch_coverage=1
00:09:33.599 --rc genhtml_function_coverage=1
00:09:33.599 --rc genhtml_legend=1
00:09:33.599 --rc geninfo_all_blocks=1
00:09:33.599 --rc geninfo_unexecuted_blocks=1
00:09:33.599 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:09:33.599 '
00:09:33.599 23:09:30 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS='
00:09:33.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:33.599 --rc genhtml_branch_coverage=1
00:09:33.599 --rc genhtml_function_coverage=1
00:09:33.599 --rc genhtml_legend=1
00:09:33.599 --rc geninfo_all_blocks=1
00:09:33.599 --rc geninfo_unexecuted_blocks=1
00:09:33.599 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:09:33.599 '
00:09:33.599 23:09:30 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov
00:09:33.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:33.599 --rc genhtml_branch_coverage=1
00:09:33.599 --rc genhtml_function_coverage=1
00:09:33.599 --rc genhtml_legend=1
00:09:33.599 --rc geninfo_all_blocks=1
00:09:33.599 --rc geninfo_unexecuted_blocks=1
00:09:33.599 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:09:33.599 '
00:09:33.599 23:09:30 -- common/autotest_common.sh@1704 -- $ LCOV='lcov
00:09:33.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:33.599 --rc genhtml_branch_coverage=1
00:09:33.599 --rc genhtml_function_coverage=1
00:09:33.599 --rc genhtml_legend=1
00:09:33.599 --rc geninfo_all_blocks=1
00:09:33.599 --rc geninfo_unexecuted_blocks=1
00:09:33.599 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh
00:09:33.599 '
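The trace above is scripts/common.sh checking whether the installed lcov (1.15 here, taken from `lcov --version | awk '{print $NF}'`) is older than 2: both version strings are split on dots, dashes, and colons (IFS=.-:), normalized through `decimal`, and compared component by component, and since 1 < 2 at the first component the check succeeds and the 1.x-style --rc options are kept in LCOV_OPTS. A standalone sketch of that comparison (a simplified stand-in named version_lt; the real cmp_versions also handles the other operators and zero-padding):

    version_lt() {                       # return 0 if $1 sorts before $2, component-wise
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing component decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                         # equal is not less-than
    }
    version_lt 1.15 2 && echo 'old lcov: keep the --rc lcov_* option spelling'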
00:09:33.599 23:09:30 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh
00:09:33.599 23:09:30 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:09:33.599 23:09:30 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:33.599 23:09:30 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:33.599 23:09:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:33.599 23:09:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:33.599 23:09:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:33.599 23:09:30 -- paths/export.sh@5 -- $ export PATH
00:09:33.599 23:09:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:33.599 23:09:30 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output
00:09:33.599 23:09:30 -- common/autobuild_common.sh@440 -- $ date +%s
00:09:33.599 23:09:30 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731881370.XXXXXX
00:09:33.599 23:09:30 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731881370.e5ef8N
00:09:33.599 23:09:30 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:09:33.599 23:09:30 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:09:33.599 23:09:30 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/'
00:09:33.599 23:09:30 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp'
00:09:33.599 23:09:30 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:09:33.599 23:09:30 -- common/autobuild_common.sh@456 -- $ get_config_params
00:09:33.599 23:09:30 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:09:33.599 23:09:30 -- common/autotest_common.sh@10 -- $ set +x
00:09:33.859 23:09:30 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:09:33.859 23:09:30 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:09:33.859 23:09:30 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
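The paths/export.sh trace above also shows why the echoed PATH contains the same tool directories twice: each line prepends unconditionally (PATH=/opt/...:$PATH), and the file has evidently been sourced more than once in this job. A guard of the following shape would keep the list clean (path_prepend is a hypothetical helper for illustration, not part of the SPDK scripts):

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already on PATH: leave it alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH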
00:09:33.859 23:09:30 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:09:33.859 23:09:30 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:09:33.859 23:09:30 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:09:33.859 23:09:30 -- spdk/autopackage.sh@19 -- $ timing_finish
00:09:33.859 23:09:30 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:09:33.859 23:09:30 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:09:33.859 23:09:30 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt
00:09:33.859 23:09:30 -- spdk/autopackage.sh@20 -- $ exit 0
00:09:33.868 + [[ -n 1179229 ]]
00:09:33.868 + sudo kill 1179229
00:09:33.868 [Pipeline] }
00:09:33.887 [Pipeline] // stage
00:09:33.893 [Pipeline] }
00:09:33.911 [Pipeline] // timeout
00:09:33.917 [Pipeline] }
00:09:33.931 [Pipeline] // catchError
00:09:33.936 [Pipeline] }
00:09:33.951 [Pipeline] // wrap
00:09:33.957 [Pipeline] }
00:09:33.971 [Pipeline] // catchError
00:09:33.981 [Pipeline] stage
00:09:33.983 [Pipeline] { (Epilogue)
00:09:33.996 [Pipeline] catchError
00:09:33.998 [Pipeline] {
00:09:34.012 [Pipeline] echo
00:09:34.014 Cleanup processes
00:09:34.021 [Pipeline] sh
00:09:34.313 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:34.313 1321267 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:34.327 [Pipeline] sh
00:09:34.614 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:09:34.614 ++ grep -v 'sudo pgrep'
00:09:34.614 ++ awk '{print $1}'
00:09:34.614 + sudo kill -9
00:09:34.614 + true
00:09:34.627 [Pipeline] sh
00:09:34.914 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:09:34.914 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:09:34.914 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:09:36.293 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB
00:09:46.291 [Pipeline] sh
00:09:46.578 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:09:46.578 Artifacts sizes are good
00:09:46.596 [Pipeline] archiveArtifacts
00:09:46.605 Archiving artifacts
00:09:46.741 [Pipeline] sh
00:09:47.027 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest
00:09:47.042 [Pipeline] cleanWs
00:09:47.052 [WS-CLEANUP] Deleting project workspace...
00:09:47.052 [WS-CLEANUP] Deferred wipeout is used...
00:09:47.059 [WS-CLEANUP] done
00:09:47.061 [Pipeline] }
00:09:47.080 [Pipeline] // catchError
00:09:47.093 [Pipeline] sh
00:09:47.407 + logger -p user.info -t JENKINS-CI
00:09:47.417 [Pipeline] }
00:09:47.431 [Pipeline] // stage
00:09:47.437 [Pipeline] }
00:09:47.452 [Pipeline] // node
00:09:47.459 [Pipeline] End of Pipeline
00:09:47.501 Finished: SUCCESS
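For reference, the epilogue's "Cleanup processes" step above uses the common pgrep pipeline: list candidate processes, filter out the pgrep invocation itself, extract the pids, and kill them, with the trailing `+ true` keeping the step green when nothing matched. An equivalent one-liner sketch (the Jenkins step runs the stages separately, as the ++ trace shows, and `xargs -r` is a substitution for the bare `kill -9` call):

    sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk \
        | grep -v 'sudo pgrep' \
        | awk '{print $1}' \
        | xargs -r sudo kill -9 || true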